CN114298126A - Brain function network classification method based on conditional mutual information and kernel density estimation - Google Patents

Brain function network classification method based on conditional mutual information and kernel density estimation

Info

Publication number
CN114298126A
CN114298126A (application CN202111245109.8A)
Authority
CN
China
Prior art keywords
causal
bfn
function network
brain
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111245109.8A
Other languages
Chinese (zh)
Other versions
CN114298126B (en
Inventor
冀俊忠 (Ji Junzhong)
王飞鹏 (Wang Feipeng)
刘金铎 (Liu Jinduo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202111245109.8A priority Critical patent/CN114298126B/en
Publication of CN114298126A publication Critical patent/CN114298126A/en
Application granted granted Critical
Publication of CN114298126B publication Critical patent/CN114298126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a brain function network classification method based on conditional mutual information and kernel density estimation. The method estimates the typical brain causal networks of different groups from the groups' preprocessed fMRI time series and retains only the functional connections between ROIs whose causal connections differ greatly; kernel density estimation is used to obtain the difference in the probability density of the strength of the same functional connection under different labels, yielding the causal strength between each functional connection and the label, so that functional connections with strong causal strength with respect to the label can be amplified and those with weak causal strength attenuated; and the BFN obtained from the preprocessed fMRI time series using the Pearson correlation coefficient is fused with the causal knowledge obtained in the first two steps to obtain a BFN rich in classification information, which is sent into BrainNetCNN for classification. Causal knowledge can thus be extracted from the preprocessed fMRI time series and from the BFN, respectively, using conditional mutual information and kernel density estimation, and fused into the original BFN to obtain a BFN rich in classification information.

Description

Brain function network classification method based on condition mutual information and kernel density estimation
Technical Field
The invention relates to a causal knowledge extraction and knowledge fusion method for fMRI (functional magnetic resonance imaging) data, and designs a brain function network classification model based on conditional mutual information and kernel density estimation for the goal of fMRI-based computer-aided diagnosis of brain diseases.
Background
A Brain Functional Network (BFN) is typically constructed from functional magnetic resonance imaging (fMRI) data and can reveal patterns of brain functional activity. Specifically, a BFN is composed of nodes, each corresponding to a brain region of interest (ROI), and edges, each representing a functional connection between ROIs. Studies have shown that brain diseases are often associated with abnormalities in the Functional Connectivity (FC) of the BFN, and BFN classification has been successfully applied to the computer-aided diagnosis (CAD) of many brain diseases, such as Alzheimer's disease (AD), depression, and autism spectrum disorder (ASD).
In recent years, more and more machine learning methods have been successfully applied to the classification of BFNs. These methods can be broadly divided into two categories: traditional machine learning (TML) methods and deep learning (DL) methods. TML methods include support vector machines (SVM), logistic regression (LR), and random forests (RF), among others. These methods benefit from the characteristics of shallow models and have the advantages of simple training and high time efficiency. However, they cannot automatically learn and extract features, which limits their classification performance. More recently, DL methods including the fully-connected cascade artificial neural network (FCC-ANN), convolutional neural networks (CNN), and graph convolutional networks (GCN) have also been used for BFN classification. DL methods perform better than TML methods because their deep structure can efficiently extract high-level features from neuroimaging data. However, the black-box nature of DL methods makes it impossible for users to understand how they arrive at a particular classification, which hinders their widespread use in the strictly regulated medical industry, where errors are costly.
Currently, causal learning methods that infer causal relationships from data are attractive and have become powerful tools for understanding the mechanisms inherent in complex models. In particular, previous studies have shown that the causal network of the brain is important for assessing brain function, and its damage is closely related to brain disease. Therefore, the utilization of a causal learning method to assist BFN classification not only can improve the performance of the classification method, but also can help people to understand the intrinsic mechanism of the method.
Disclosure of Invention
Aiming at the problem that the performance of a TML method is insufficient and the interpretability of a DL method is poor in the current brain function network classification, the invention provides a brain function network classification model based on conditional mutual information and kernel density estimation (CMI-KDE). The model fully utilizes the strong interpretability of causal knowledge and the characteristic that a brain causal network is closely related to a function network, firstly extracts causal connection between ROIs through Conditional Mutual Information (CMI) and Partial Correlation (PC), then utilizes Kernel Density Estimation (KDE) to obtain the causal strength between input function connection and labels, and finally fuses the causal knowledge obtained in the previous two steps with the BFN to obtain the BFN with rich classification information and sends the BFN into BrainNetCNN (a convolutional neural network designed for brain network classification) for classification.
The main idea of the invention is as follows: estimate the typical brain causal networks of the different groups from their preprocessed fMRI time series, use the differences between the groups' typical brain causal networks to remove unnecessary functional connections from the functional network, retaining only the functional connections between ROIs whose causal connections differ greatly; use kernel density estimation to obtain the difference in the probability density of the strength of the same functional connection under different labels, yielding the causal strength between each functional connection and the label, so that functional connections with strong causal strength with respect to the label can be amplified and those with weak causal strength attenuated; and fuse the BFN obtained from the preprocessed fMRI time series using the Pearson correlation coefficient with the causal knowledge obtained in the first two steps to obtain a BFN rich in classification information, which is sent into BrainNetCNN for classification.
A brain function network classification model based on conditional mutual information and kernel density estimation comprises the following steps:
(step one) data acquisition: to verify the validity of the model proposed by the present invention, we will perform experiments on the ABIDE I dataset to evaluate the classification performance of the model.
(step two) selecting a region of interest (ROI): the invention adopts the classical AAL (Automated Anatomical Labeling) template and takes the average value of all voxels in an ROI as the signal of that ROI.
(step three) constructing a brain function network: brain function networks are generally constructed by correlation-based methods; the invention adopts the most common one, the Pearson correlation coefficient.
(step four) estimating causal connections between ROIs: following a two-stage approach, we first obtain the skeleton of the causal graph using partial correlation (PC), and then determine the directions of the causal graph using conditional mutual information (CMI), which can capture complex time dependencies between input sequences.
(step five), estimating the causal strength between the functional connection and the label: and (3) estimating the probability density distribution of the strength of the input functional connection under different labels by using KDE, and taking the probability density difference as the estimation of the causal strength between the functional connection and the labels.
And (step six), fusing the causal knowledge of the step four and the step five with BFN to obtain BFN with rich classification information, and inputting the BFN into BrainNetCNN for classification.
Compared with the prior art, the invention has the following obvious advantages and beneficial effects:
(1) A new model for brain function network classification based on conditional mutual information and kernel density estimation is provided, with excellent classification performance compared with most TML or DL methods.
(2) The method can extract causal knowledge from both the preprocessed fMRI time series and the BFN by using conditional mutual information and kernel density estimation respectively, and fuse this causal knowledge into the original BFN to obtain a BFN rich in classification information.
(3) The present invention can give a reasonable causal interpretation of the results obtained from the DL method based on causal knowledge.
(4) The experimental results on ABIDE I show that the invention can provide an alternative auxiliary means for medical researchers to analyze the pathogenesis of brain diseases.
Drawings
FIG. 1: a flow chart of a model involved in the method.
FIG. 2: a flow chart for estimating causal connections between a group of ROI under test using CMI and PC.
FIG. 3: trend plot of the BrainNetCNN evaluation index each time one feature of the ten aliquots was removed.
Detailed Description
The following explains the specific embodiments and detailed steps of the present invention, and the flow chart of the specific implementation of the present invention is shown in fig. 1, and specifically includes:
(step 1) data acquisition.
To verify the validity of the proposed model, we perform experiments on the ABIDE I dataset to evaluate its classification performance. ABIDE I contains functional and structural brain imaging data from 17 different sites around the world. Since data processing for fMRI is very flexible, the Preprocessed Connectomes Project (http://preprocessed-connectomes-project.org/abide/) provides five types of data preprocessed by different teams using their preferred strategies. We selected the DPARSF version, containing a total of 505 ASD patients and 530 normal controls after subjects with incomplete phenotypic information were removed.
(step two) selecting a region of interest (ROI).
The invention adopts the classical AAL (Automated Anatomical Labeling) template and takes the average value of all voxels in an ROI as the signal of that ROI. Specifically, the AAL template has 116 brain regions in total, of which 90 are cerebral regions and 26 are cerebellar regions. Since numerous studies have shown that mental disorders are mainly related to the cerebrum, while the cerebellum mainly controls basic physiological activities of the human body, the invention considers only the 90 cerebral regions of the AAL template.
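The voxel-averaging step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name `roi_signals` and the array layout are our assumptions:

```python
import numpy as np

def roi_signals(fmri, atlas, n_roi=90):
    """Average all voxels inside each ROI at every time point.

    fmri: (T, X, Y, Z) preprocessed fMRI volume series.
    atlas: (X, Y, Z) integer AAL label volume (labels 1..116); only the
    first n_roi cerebral regions are kept, as in the text above.
    Returns a (T, n_roi) matrix of ROI signals.
    """
    T = fmri.shape[0]
    out = np.zeros((T, n_roi))
    flat = fmri.reshape(T, -1)          # flatten each volume to (T, voxels)
    labels = atlas.ravel()
    for r in range(1, n_roi + 1):
        mask = labels == r              # voxels belonging to ROI r
        out[:, r - 1] = flat[:, mask].mean(axis=1)
    return out

# Tiny synthetic example: 2 time points, a 2x1x1 volume with two ROIs.
atlas = np.array([[[1]], [[2]]])
fmri = np.array([[[[10.]], [[20.]]],
                 [[[30.]], [[40.]]]])
sig = roi_signals(fmri, atlas, n_roi=2)
```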
(step three) constructing a brain function network.
The brain function network is generally constructed by a correlation-based method; the invention adopts the most common approach, the Pearson correlation coefficient. For brain regions i and j, we calculate the Pearson correlation coefficient r(x_i, x_j) using the following formula:

r(x_i, x_j) = cov(x_i, x_j) / (σ_{x_i} σ_{x_j})    (1)

where cov(·,·) represents the covariance between two variables and σ represents the standard deviation of a variable.
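Equation (1), applied to every pair of ROI signals, yields the BFN matrix. A minimal sketch (the function name `build_bfn` is ours):

```python
import numpy as np

def build_bfn(ts):
    """Build a brain functional network from an fMRI time-series matrix.

    ts: array of shape (T, N) -- T time points, N ROI signals.
    Returns the N x N Pearson correlation matrix, i.e. equation (1)
    evaluated for every ROI pair.
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean(axis=0)               # center each ROI signal
    cov = ts.T @ ts / (ts.shape[0] - 1)     # sample covariance matrix
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)         # cov / (sigma_i * sigma_j)

# Example: 200 time points, 5 ROIs of white noise.
rng = np.random.default_rng(0)
bfn = build_bfn(rng.standard_normal((200, 5)))
```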
(step four) estimation of causal links between ROIs:
Following a two-stage approach, we first obtain the skeleton of the causal graph using partial correlation (PC) and then determine the directions of the causal graph using conditional mutual information (CMI), which can capture the complex time dependencies between input sequences. The whole flow is shown in FIG. 2. Next, we describe the PC and CMI procedures in turn.
Let Z_{/(i,j)} = [χ_1, ..., χ_N, I] \ [χ_i, χ_j] be the matrix of all super-ROI signals except χ_i and χ_j, where I is a unit vector, χ_i is the super-ROI signal of the i-th ROI (the concatenation of the i-th ROI signals of all subjects whose label equals y), and N is the number of ROIs, here 90. We then fit the super-ROI signal χ_i from Z_{/(i,j)} by linear regression; the resulting regression coefficients are:

ω_i = argmin_ω ‖χ_i − <ω, Z_{/(i,j)}>‖²    (2)

where <·,·> denotes the linear regression function. The residual R_i of fitting χ_i from Z_{/(i,j)} is then:

R_i = χ_i − <ω_i, Z_{/(i,j)}>    (3)

and the residual R_j of fitting χ_j from Z_{/(i,j)} is obtained in the same way. The element p_{i,j}(y) of the partial correlation matrix P(y) is then the value r(R_i, R_j) obtained by applying equation (1) to the two residuals.
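The regress-and-correlate procedure of equations (2)–(3) can be sketched as follows. This is a minimal illustration under our reading of the text (the function name `partial_corr` is ours, and least squares stands in for the unspecified regression fit):

```python
import numpy as np

def partial_corr(X, i, j):
    """Partial correlation between columns i and j of X, controlling for
    all remaining columns plus an intercept (the unit vector I).

    X: (samples, N) matrix of super-ROI signals. Mirrors equations (2)-(3):
    regress chi_i and chi_j on Z_{/(i,j)}, then correlate the residuals.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[1]
    rest = [k for k in range(n) if k not in (i, j)]
    Z = np.column_stack([X[:, rest], np.ones(len(X))])  # Z_{/(i,j)} with unit vector
    # Least-squares coefficients (equation (2)) and residuals (equation (3)).
    w_i, *_ = np.linalg.lstsq(Z, X[:, i], rcond=None)
    w_j, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    R_i = X[:, i] - Z @ w_i
    R_j = X[:, j] - Z @ w_j
    return np.corrcoef(R_i, R_j)[0, 1]

# Example: column 1 is nearly a copy of column 0, so their partial
# correlation (controlling for the independent columns 2 and 3) is high.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(500)
```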
For the group of subjects with label equal to y, the CMI from ROI i to ROI j can be written as:

m_{i,j}(y) = I(χ_i^{(1)}; χ_j^{(2)} | χ_j^{(1)})    (4)

where χ_j^{(1)} and χ_j^{(2)} denote the super-vectors formed from the first and second halves of the fMRI time series of the j-th ROI over the group of subjects with label equal to y. Equation (4) is equivalent to:

m_{i,j}(y) = H(χ_i^{(1)}, χ_j^{(1)}) + H(χ_j^{(1)}, χ_j^{(2)}) − H(χ_j^{(1)}) − H(χ_i^{(1)}, χ_j^{(1)}, χ_j^{(2)})    (5)
the calculation of CMI is translated into an estimate of time series (continuous variable) entropy. For vectors sampled from a continuous variable a
Figure RE-GDA0003522342460000067
In other words, the differential entropy is as follows:
Figure RE-GDA0003522342460000068
wherein f (a) is the variable a in space
Figure RE-GDA0003522342460000069
Log is the natural logarithm. Unbiased estimation if we have a logf (a)
Figure RE-GDA00035223424600000610
Then it can be estimated according to
Figure RE-GDA00035223424600000611
Entropy of (d):
Figure RE-GDA0003522342460000071
where l is a vector
Figure RE-GDA0003522342460000072
Length of (d). And we can get from Kozachenko-Leonenko differential entropy estimation:
logf(ai)≈ψ(k)-ψ(l)-daElog(∈)-log(Vda)(8)
wherein d isaIs the dimension of the variable a, VdaIs daDimension the volume of the unit sphere, epsilon is aiThe distance to its k-th near neighbor, ψ is a digamma function. The distance is measured by Maximum, and k is 1, so that we can finally obtain the following formula (7) and (8):
Figure RE-GDA0003522342460000073
wherein e (i) represents aiDistance to its k-th near neighbor. So that it is possible to obtain:
Figure RE-GDA0003522342460000074
Figure RE-GDA0003522342460000075
Figure RE-GDA0003522342460000076
After obtaining the PC matrix P(y), representing the skeleton of the causal graph, and the CMI matrix M(y), representing the directions of the causal graph, we fuse the two to obtain the causal matrix C(y) of the group of subjects with label equal to y; its elements c_{i,j}(y) combine p_{i,j}(y) and m_{i,j}(y) as given in equation (13).
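The nearest-neighbour entropy estimate of equation (9) can be implemented with a k-d tree. A sketch assuming SciPy is available (the function name `kl_entropy` is ours); note the max-norm unit ball in d dimensions has volume 2^d:

```python
import numpy as np
from scipy.special import digamma
from scipy.spatial import cKDTree

def kl_entropy(a, k=1):
    """Kozachenko-Leonenko nearest-neighbour entropy estimate (equation (9)),
    using the maximum (Chebyshev) norm as in the text.

    a: (l, d) array of l samples of a d-dimensional variable.
    """
    a = np.atleast_2d(np.asarray(a, dtype=float))
    l, d = a.shape
    tree = cKDTree(a)
    # Distance to the k-th neighbour, excluding the point itself.
    eps, _ = tree.query(a, k=k + 1, p=np.inf)
    eps = eps[:, -1]
    # Under the max norm the unit ball has volume 2^d, so log V_d = d*log(2).
    return -digamma(k) + digamma(l) + d * np.log(2) + d * np.mean(np.log(eps))

# Sanity checks: uniform on [0, 1] has differential entropy 0;
# a standard normal has entropy 0.5*log(2*pi*e) ~ 1.4189.
rng = np.random.default_rng(0)
u = rng.random((5000, 1))
g = rng.standard_normal((5000, 1))
```

The four entropy terms of equation (5) can each be estimated with this function and combined to give m_{i,j}(y).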
(step five) estimating causal strength between the functional connection and the label.
The causal strength between a functional connection and the label is estimated to assess the importance of that feature. It can be used to amplify important features and attenuate unimportant ones, thereby reducing intra-group gaps and enlarging inter-group gaps. The invention uses the difference of the probability densities of a feature under different labels to estimate the causal strength between the functional connection and the label, because the probability density difference can describe a more complex relationship between them than Pearson-correlation-based and similar methods. The key to this step is the estimation of the probability density of a continuous variable.
The general form of a non-parametric probability density estimate can be derived as follows. Let R be a small region around a; then:

P = ∫_R f(a) da    (14)

where f is the probability density of the variable a and P is the probability that a sample falls in R. If N samples are drawn and k of them fall in R, k follows a binomial distribution:

Pr(k) = C(N, k) P^k (1 − P)^{N−k}    (15)

so k/N is an unbiased and consistent estimate of P. Provided f(a) is approximately constant over the very small region R, then:

P = ∫_R f(a) da ≈ f(a) v    (16)

where v is the volume of R. From equations (15) and (16) it can be obtained that:

f̂(a) = k / (N v)    (17)
the invention chooses a fixed v to infer k from the data to estimate f (a), i.e., KDE. In order to obtain a smooth probability density function, the present invention introduces a gaussian kernel function:
Figure RE-GDA0003522342460000092
where d is the dimension of the input vector U. Combining equations (17) and (18), the probability density of variable a can be estimated by:
Figure RE-GDA0003522342460000093
where h is the bandwidth of the KDE. Determined by the Scott rule. Let equation (19) be gau _ kde, we can estimate the probability distribution density of the functional connection strength between ROI i and j when the label is equal to y according to the following equation:
Figure RE-GDA0003522342460000094
wherein
Figure RE-GDA0003522342460000095
Is a brain function network matrix EkIs a function of concatenating the input elements into a vector. The difference in probability distribution of functional linkage strength between ROIs i and j under different labels is therefore as follows:
Figure RE-GDA0003522342460000096
wherein d isi,jAre elements of the probability density difference matrix D.
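A sketch of one entry d_{i,j} under our reading of equations (19)–(21), assuming SciPy (whose `gaussian_kde` uses Scott's rule by default, matching the bandwidth choice in the text); the function name `fc_density_difference` and the grid-based approximation of the integral are ours:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fc_density_difference(e_y0, e_y1, grid_size=512):
    """Approximate d_{i,j}: the integrated absolute difference between the
    KDE-estimated densities of one functional connection's strength under
    the two labels.

    e_y0, e_y1: 1-D arrays of the (i, j) entries of the BFN matrices E_k
    for the two groups.
    """
    kde0 = gaussian_kde(e_y0)          # equation (19)/(20) for label y0
    kde1 = gaussian_kde(e_y1)          # ... and for label y1
    lo = min(e_y0.min(), e_y1.min()) - 0.5
    hi = max(e_y0.max(), e_y1.max()) + 0.5
    grid = np.linspace(lo, hi, grid_size)
    # Trapezoidal approximation of the integral in equation (21).
    return np.trapz(np.abs(kde0(grid) - kde1(grid)), grid)

# Example: identical groups give ~0; well-separated groups approach 2
# (the total variation distance between two disjoint densities, doubled).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, 400)
b = rng.normal(10.0, 0.1, 400)
```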
(step six) fusing and classifying the causal knowledge and BFN.
Through the previous two steps we obtain the causal connection matrices C(y_0) and C(y_1) of the two groups, and the probability density difference matrix D of the input features under the two labels. First we take the absolute difference of C(y_0) and C(y_1):

∈_{i,j} = |c_{i,j}(y_0) − c_{i,j}(y_1)|    (22)

where ∈_{i,j} represents the difference in the causal connection from ROI i to ROI j under the two labels. To correspond to the symmetric input brain function network matrix, we use ∈_{i,j} and ∈_{j,i} together to evaluate the difference in the functional connection between ROIs i and j under the two labels:

h_{i,j} = ∈_{i,j} + ∈_{j,i}    (23)

where h_{i,j} is an element of the sparse constraint matrix H. After obtaining H, we binarize it by keeping only the m most different connections. We then normalize the density difference matrix D to obtain the connection scaling matrix Z:

z_{i,j} = d_{i,j} / avg(D)    (24)

where avg(·) represents the mean function. Next, we obtain the weight matrix of the brain function network by the element-wise (dot) product of the matrices H and Z:

W = H · Z    (25)

Finally, the weight matrix W obtained from the matrices of the two preceding sections can be fused with any brain function network matrix E_k by the following formula and the result sent into BrainNetCNN for classification:

y = BrainNetCNN(W · E_k)    (26)
In order to fully verify the superiority of the method, the CMI-KDE of the invention is compared on ABIDE I with existing methods including RF, LR, SVM, CNN, BrainNetCNN and CKEW, and the results are evaluated with six indexes widely used for the quantitative evaluation of classification tasks: Acc, Recall, Precision, F1, bAcc and AUC. We randomly divided the ABIDE I dataset into training, testing and validation sets at an 8:1:1 ratio and repeated this ten times. The results are reported in Table 1 as mean ± standard deviation.
TABLE 1 comparison of methods on ABIDE I data set
Method Acc(%) Recall(%) Precision(%) F1(%) bAcc(%) AUC(%)
RF 60.18±2.93 53.98±8.03 60.82±4.99 56.64±4.20 60.41±2.78 65.38±2.61
LR 62.61±4.02 61.63±7.61 62.01±4.29 61.47±4.28 62.88±4.02 67.55±2.85
SVM 62.71±2.05 58.84±7.12 62.88±3.41 60.39±2.94 62.93±2.00 68.51±1.56
CNN 62.23±2.39 57.14±4.74 61.01±2.54 58.93±3.26 62.00±2.45 68.20±2.18
BrainNetCNN 65.15±2.86 59.59±4.06 64.46±3.06 61.90±3.34 64.89±2.89 69.72±2.41
CKEW 67.28±1.44 57.55±4.45 68.60±0.94 62.51±2.72 66.83±1.57 73.43±1.60
CMI-KDE (ours) 70.39±2.22 66.94±3.51 69.72±2.76 68.24±2.49 70.23±2.22 74.12±1.60
From the table above we can see that all six evaluation indexes of our model are significantly higher than those of the six comparison methods, reaching 70.39% and 74.12% on Acc and AUC respectively, so the model has great application potential. To further verify the effectiveness of each part of the model, an ablation experiment was carried out under the same conditions, where FC means constructing the brain function network with the Pearson correlation coefficient only, KDE means integrating the causal knowledge between functional connections and labels into the brain function network, CMI means integrating the causal knowledge between ROIs into the brain function network, and CMI-KDE denotes the whole model of the invention. The results obtained are shown in Table 2.
TABLE 2 ABIDE I data set Upper ablation experiment
Method Acc(%) Recall(%) Precision(%) F1(%) bAcc(%) AUC(%)
FC 65.15±2.86 59.59±4.06 64.46±3.06 61.90±3.34 64.89±2.89 69.72±2.41
KDE 66.41±2.85 64.29±4.49 64.85±2.96 64.50±3.32 66.31±2.88 71.27±1.59
CMI 68.64±1.56 65.31±2.58 67.70±1.97 66.45±1.78 68.49±1.56 73.48±1.84
CMI-KDE 70.39±2.22 66.94±3.51 69.72±2.76 68.24±2.49 70.23±2.22 74.12±1.60
From the above table it can be seen that each part of the model proposed in the invention brings a significant improvement over the baseline, and the full model still attains the highest performance. This shows that the model is compact, with no redundant parts.
Currently, DL is widely used in many fields due to its strong feature extraction capability and its simple end-to-end design. Although it performs better than some TML methods, its complex structure and mapping relationships lead to a lack of interpretability. The model provided by the invention also belongs to the DL methods, but it not only performs better than other DL methods, it can also give a causal explanation of the obtained results. To verify this, we sort the BFN features using the weight matrix W and divide them into ten equal parts. We then remove one of the ten parts at a time and observe the performance trend of BrainNetCNN. The results are shown in FIG. 3, where 0 represents removal of the first part of the features (the most important 10%), 1 represents removal of the second part, and 9 represents removal of the tenth part (the least important 10%). We can see that indexes such as Acc and bAcc show a clear upward trend as the importance of the removed features decreases. The fluctuation of Recall is comparatively drastic; however, Recall is only a one-sided index, and the more comprehensive F1 score is also roughly in an upward trend. Therefore, we can conclude that the weight matrix W of CMI-KDE reveals the degree of importance that BrainNetCNN attaches to the input features, and thus gives an explanation of the results of BrainNetCNN.
The experiments show that, compared with other DL methods, the model CMI-KDE provided by the invention not only performs better but can also give a causal explanation of the obtained results, alleviating the black-box problem of DL methods to a certain extent; it therefore has great application prospects in the computer-aided diagnosis of brain diseases.

Claims (2)

1. A brain function network classification method based on conditional mutual information and kernel density estimation, characterized in that the method comprises the following steps:
step one), data acquisition: performing experiments on the ABIDE I data set to evaluate classification performance;
step two), selecting a region of interest ROI: adopting an AAL template, and taking the average value of all voxels in the ROI as a signal of the ROI;
step three), constructing a brain function network: the brain function network is constructed by a correlation-based method, and a Pearson correlation coefficient is adopted to construct the brain function network;
step four) estimating causal connections among ROIs: obtaining the skeleton of the causal graph using partial correlation (PC) according to a two-stage approach, and then determining the directions of the causal graph using conditional mutual information (CMI), which can capture the complex time dependencies among input sequences;
step five), estimating causal strength between the functional connection and the label: estimating probability density distribution of the strength of the input functional connection under different labels by using KDE, and taking the probability density difference as estimation of causal strength between the functional connection and the labels;
and step six), fusing the causal knowledge of the step four and the step five with BFN to obtain a BFN with classification information, and inputting the BFN into BrainNetCNN for classification.
2. The brain function network classification method based on conditional mutual information and kernel density estimation as claimed in claim 1, wherein: in step two), the AAL template has 116 brain regions in total, of which 90 are cerebral regions and 26 are cerebellar regions; only the 90 cerebral regions of the AAL template are considered.
CN202111245109.8A 2021-10-26 2021-10-26 Brain function network classification method based on conditional mutual information and kernel density estimation Active CN114298126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111245109.8A CN114298126B (en) 2021-10-26 2021-10-26 Brain function network classification method based on conditional mutual information and kernel density estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111245109.8A CN114298126B (en) 2021-10-26 2021-10-26 Brain function network classification method based on conditional mutual information and kernel density estimation

Publications (2)

Publication Number Publication Date
CN114298126A true CN114298126A (en) 2022-04-08
CN114298126B CN114298126B (en) 2024-05-31

Family

ID=80964144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111245109.8A Active CN114298126B (en) 2021-10-26 2021-10-26 Brain function network classification method based on conditional mutual information and kernel density estimation

Country Status (1)

Country Link
CN (1) CN114298126B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117975041A (en) * 2024-03-29 2024-05-03 烟台大学 Brain network feature extraction method, system and equipment based on graph convolution neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324752A1 (en) * 2013-03-15 2014-10-30 Alexander Statnikov Data Analysis Computer System and Method For Fast Discovery Of Multiple Markov Boundaries
CN112130668A (en) * 2020-09-27 2020-12-25 杭州电子科技大学 Inter-muscle coupling analysis method for mutual information of R rattan Copula
CN113040715A (en) * 2021-03-09 2021-06-29 北京工业大学 Human brain function network classification method based on convolutional neural network



Also Published As

Publication number Publication date
CN114298126B (en) 2024-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant