CN115222688B - Medical image classification method based on graph network time sequence

Medical image classification method based on graph network time sequence

Info

Publication number
CN115222688B
Authority
CN
China
Prior art keywords
graph
neural network
convolution neural
layer
time
Prior art date
Legal status
Active
Application number
CN202210814372.2A
Other languages
Chinese (zh)
Other versions
CN115222688A (en)
Inventor
潘丹
骆根强
张怡聪
容华斌
曾安
Current Assignee
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN202210814372.2A priority Critical patent/CN115222688B/en
Publication of CN115222688A publication Critical patent/CN115222688A/en
Application granted granted Critical
Publication of CN115222688B publication Critical patent/CN115222688B/en

Classifications

    • G06T 7/0012 Image analysis; Inspection of images; Biomedical image inspection
    • G06V 10/25 Image preprocessing; Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/764 Recognition or understanding using pattern recognition or machine learning; Classification, e.g. of video objects
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Recognition or understanding using neural networks
    • G06T 2207/10088 Image acquisition modality: Magnetic resonance imaging [MRI]
    • G06T 2207/30016 Subject of image: Brain
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a medical image classification method based on a graph network time sequence, which comprises the following steps: acquiring fMRI image samples; constructing, based on a k-s verification method, a graph network time sequence that shows the dynamic changes of the functional connections among brain partitions, and processing the fMRI image samples to obtain the graph network time sequence corresponding to each fMRI image sample; constructing a graph convolution neural network-time domain convolution neural network model, training and verifying it, and finally classifying medical images with the trained model. The medical image classification method provided by the invention captures the rule of dynamic change of brain functional network connections; the proposed graph convolution neural network-time domain convolution neural network model facilitates the extraction of graph features and the learning of the change rules in the graph network time sequence, and effectively improves the classification capability of the model.

Description

Medical image classification method based on graph network time sequence
Technical Field
The invention relates to the technical field of computer analysis of medical images, in particular to a medical image classification method based on a graph network time sequence.
Background
With the development of modern medicine, medical images play an increasingly important role in the auxiliary diagnosis and treatment of diseases. A great deal of research shows that many neuropsychiatric diseases (such as AD and schizophrenia) are related to topological changes of brain structural and functional networks. In recent years, human brain connectomics (Human Connectome) has been proposed; it mainly studies the dynamic, complex functional networks formed by brain connections on a wide spatio-temporal scale, so as to better understand the pathological basis of neuropsychiatric diseases and further help to understand the working mechanisms of the brain. Among the available modalities, functional magnetic resonance imaging (fMRI) offers both high temporal resolution and high spatial resolution, provides an important means for studying the functions of the human brain, and has become a research hotspot and difficulty of human brain connectomics. At the same time, however, fMRI images are susceptible to noise interference and have high data dimensionality, which causes great difficulty in data processing and analysis. Aiming at these characteristics of fMRI images, deep learning methods and a data-driven approach can mine more valuable information and simplify the manual processing and analysis of the data, thereby reducing the burden on doctors and researchers.
In existing fMRI-based medical image classification methods, a brain functional network is constructed from fMRI based on brain connectomics, and classification is performed according to the topological structure and various network parameters of the brain functional network. However, such a method uses the BOLD signal time series contained in an fMRI image only to construct a single brain functional network for an individual human brain, and does not make full use of the associated information of the BOLD signal time series in the spatial dimension; it therefore cannot reflect how the correlation relationships between different brain areas change dynamically over time during the neurophysiological process, and these changing trends may play a critical role in fMRI classification.
The prior art discloses a brain network classification method based on a graph neural network. The method comprises the following steps: firstly, extracting the BOLD signals of all brain areas from an fMRI image; secondly, constructing a brain graph capable of reflecting the topological structure characteristics of the functional connections between brain areas; thirdly, inputting the constructed brain network and the actual diagnosis label into a graph convolutional neural network for feature learning and model training. This method constructs a brain network from fMRI images and performs feature learning and classification based on the brain network, so important information hidden in the images may be ignored, and the dynamic changes over time of the correlation relationships among different brain areas during the neurophysiological process cannot be reflected.
The prior art also discloses a training method and apparatus, a computer device and a storage medium for constructing a network model based on fMRI. The method comprises the following steps: sampling and preprocessing the original fMRI image data; establishing a 3D-CNN + LSTM model; creating fMRI image segments as a first training data set, and using the fMRI segments with the minimum loss value in the first training data set as a second training data set; and training the 4D-CNN model with the second training data set and outputting a classification result. The two convolutional neural models adopted by this method can extract the temporal and spatial information in the fMRI image, but they have many parameters, and the input fMRI image has high dimensionality, so only a short time segment can be selected as the model input, and long-term dynamic change information in the fMRI image cannot be captured.
Disclosure of Invention
The invention provides a medical image classification method based on a graph network time sequence, which can reflect the dynamic change rule of brain function network connection.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a medical image classification method based on a graph network time sequence comprises the following steps:
s1: collecting an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s (Kolmogorov-Smirnov) verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
s4: and inputting the fMRI image to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize classification of the medical image.
In this scheme, a graph network time sequence capable of showing the dynamic changes of the functional connections between brain partitions is constructed from the fMRI images, so that the rule of dynamic change of the brain functional network connections can be captured; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, which facilitates the extraction of graph features and the learning of the change rules in the graph network time sequence, and effectively improves the classification capability of the model.
In step S1, the raw fMRI image is preprocessed by DPARSF software.
In the image acquisition process, factors such as the subject's head movement, respiration and heartbeat generate noise and degrade the imaging quality. Therefore, before data analysis, preprocessing is carried out first to reduce the influence of irrelevant noise and improve the signal-to-noise ratio; the preprocessing is implemented with the DPARSF software.
In step S1, the process of sampling the preprocessed fMRI image is specifically as follows: assuming that the length of a sampling time slice is k frames and the preprocessed image contains T frames in total, a start frame t0 is selected (e.g. at random) within the valid range 1 ≤ t0 ≤ T − k + 1, and the sampled segment consists of the k consecutive frames t0, t0 + 1, ..., t0 + k − 1. This is repeated to obtain a plurality of sample segments, which constitute the fMRI image samples.
Generally speaking, a large number of fMRI image samples are needed for training a deep learning model from scratch, and for fMRI images, it is often difficult to obtain a large number of fMRI image samples for model training; therefore, the scheme provides a method for increasing the number of samples by segmenting the fMRI image samples into shorter segments, so that the fMRI image sample data is enhanced, the number of training samples is greatly increased, and the training effect of the model is improved.
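As an illustration of this augmentation step, the following Python sketch draws random segments of length k from a preprocessed image held as an array whose first axis is time (the function name and the random start-frame selection are illustrative assumptions, not taken from the description above):

    import numpy as np

    def sample_segments(fmri, k, num_segments, seed=None):
        """Randomly sample num_segments sub-sequences of k frames from an fMRI image (time on axis 0)."""
        rng = np.random.default_rng(seed)
        total_frames = fmri.shape[0]
        if total_frames < k:
            raise ValueError("image has fewer frames than the requested segment length")
        segments = []
        for _ in range(num_segments):
            start = rng.integers(0, total_frames - k + 1)   # start frame within the valid range
            segments.append(fmri[start:start + k])
        return segments

    # e.g. three 64-frame training segments from one preprocessed image:
    # segments = sample_segments(preprocessed_image, k=64, num_segments=3)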
In step S2, the fMRI image sample is composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the graph network time sequence corresponding to the fMRI image sample specifically includes:
s21: for a time slice, dividing a human brain into a plurality of interested areas according to a brain area division template; taking each interested area as a vertex to obtain a vertex set;
s22: taking the correlation among the vertexes of the vertex set as an edge, and checking the correlation between the vertexes as the strength of the edge based on a k-s verification method to obtain an edge set;
s23: constructing an undirected graph of the time slice according to the vertex set and the edge set;
s24: and reselecting a time slice, repeating the steps S21-S23 to obtain an undirected graph of each time slice in the fMRI image, and obtaining the graph network time sequence corresponding to the fMRI image sample from all the undirected graphs.
Wherein, in the step S2, the vertex set is expressed as V = {v_1, v_2, ..., v_N}, where v_i represents the i-th region of interest and N is the number of regions of interest; the edge set is expressed by an adjacency matrix E of size N × N, where N represents the number of vertices and E_ij is the strength of the edge between vertices v_i and v_j. Specifically, the p-value obtained by the k-s verification of the BOLD signals of region of interest i and region of interest j is taken as the strength of the edge between vertices v_i and v_j. The k-s verification method can be used to verify whether the data in two regions of interest obey the same distribution: the smaller the p-value, the smaller the correlation between the two regions of interest. The calculation process of the p-value is specifically as follows:

Let the BOLD signal of region of interest i be X = {x_1, x_2, ..., x_n} and the BOLD signal of region of interest j be Y = {y_1, y_2, ..., y_m}, where n and m are respectively the numbers of BOLD signal points of region of interest i and region of interest j, so that the total number of BOLD signal points of the two regions of interest is n + m. The BOLD signals of region of interest i are sorted from small to large and renumbered to obtain the sorted sequence

x_(1) ≤ x_(2) ≤ ... ≤ x_(n)

and the non-descending BOLD signal sequence of region of interest j is obtained in the same way:

y_(1) ≤ y_(2) ≤ ... ≤ y_(m)

Let F_n(x) be the empirical distribution function of region of interest i:

F_n(x) = (number of BOLD signal points x_k of region of interest i with x_k ≤ x) / n

The empirical distribution function G_m(x) of region of interest j is obtained in the same way:

G_m(x) = (number of BOLD signal points y_k of region of interest j with y_k ≤ x) / m

The verification statistics of the k-s verification method are then computed:

D = max over x of | F_n(x) − G_m(x) |

Z = sqrt( n·m / (n + m) ) · D

where D is the maximum value of the absolute value of the difference between the empirical distribution F_n of the BOLD signals of region of interest i and the empirical distribution G_m of the BOLD signals of region of interest j. Finally, the p-value of the k-s verification of the BOLD signals of region of interest i and region of interest j is calculated:

p-value = 2 · e^(−2·Z²)

where Z is the verification statistic and e is the natural constant.
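The edge-strength computation described above can be sketched as follows in Python/NumPy (illustrative function names; the p-value uses the single-term asymptotic approximation 2·e^(−2Z²) stated above, and build_adjacency assembles the adjacency matrix of one time slice as in steps S21-S23):

    import numpy as np

    def ks_p_value(x, y):
        """Two-sample k-s p-value between the BOLD signals of two regions of interest,
        using the asymptotic approximation p = 2 * exp(-2 * Z**2)."""
        x = np.sort(np.asarray(x, dtype=float))
        y = np.sort(np.asarray(y, dtype=float))
        n, m = len(x), len(y)
        grid = np.concatenate([x, y])                       # evaluate both ECDFs on all sample points
        F = np.searchsorted(x, grid, side="right") / n      # empirical distribution of ROI i
        G = np.searchsorted(y, grid, side="right") / m      # empirical distribution of ROI j
        D = np.max(np.abs(F - G))                           # maximum absolute difference
        Z = np.sqrt(n * m / (n + m)) * D                    # verification statistic
        return min(1.0, 2.0 * np.exp(-2.0 * Z ** 2))

    def build_adjacency(roi_signals):
        """Adjacency matrix of one time slice; roi_signals has shape (N_roi, k_frames)."""
        N = roi_signals.shape[0]
        A = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1, N):
                A[i, j] = A[j, i] = ks_p_value(roi_signals[i], roi_signals[j])
        return A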
Wherein, the step S3 specifically includes the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part as a verification set;
s33: training a graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
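A minimal sketch of the training-and-selection procedure of steps S33-S34 (PyTorch; the data loaders, the model and the loss function are placeholders for the components defined elsewhere in this description):

    import copy
    import torch

    def train_with_validation(model, train_loader, val_loader, loss_fn, epochs=200, lr=1e-3):
        """Train the model (S33) and keep the parameters with the highest validation accuracy (S34)."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        best_acc, best_state = 0.0, copy.deepcopy(model.state_dict())
        for _ in range(epochs):
            model.train()
            for graphs, labels in train_loader:             # one graph network time sequence per sample
                optimizer.zero_grad()
                loss = loss_fn(model(graphs), labels)
                loss.backward()
                optimizer.step()
            model.eval()
            correct = total = 0
            with torch.no_grad():
                for graphs, labels in val_loader:
                    pred = model(graphs).argmax(dim=-1)
                    correct += (pred == labels).sum().item()
                    total += labels.numel()
            accuracy = correct / max(total, 1)
            if accuracy > best_acc:                         # retain the best-on-validation parameters
                best_acc, best_state = accuracy, copy.deepcopy(model.state_dict())
        model.load_state_dict(best_state)
        return model, best_acc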
In the step S2, the average value and the standard deviation of the BOLD signals of each region of interest are extracted as the features of its vertex, so as to obtain a vertex attribute matrix. In the step S3, the graph convolution neural network comprises a plurality of convolution pooling units, a full connection layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer. Each graph in the graph network time sequence input to the graph convolution neural network is described by a vertex attribute matrix X of size N × F and an adjacency matrix A of size N × N, where N is the number of vertices and F is the number of vertex attributes. The operation of the graph convolution layer is specifically:

H^(l+1) = σ( D_s^(−1/2) · A_s · D_s^(−1/2) · H^(l) · W^(l) ),  with A_s = A + I_N

where I_N is the N-order identity matrix; D_s is a diagonal matrix representing the degree of each vertex, with (D_s)_ii = Σ_j (A_s)_ij, where (A_s)_ij represents the element in row i and column j of A_s and (D_s)_ii represents the element in row i and column i of D_s; σ(·) is the activation function; H^(l) is the node embedding of the l-th layer, and since the node features of layer 0 are X, H^(0) = X; W^(l) is a learnable weight parameter.
The self-attention graph pooling layer needs to obtain the importance degree of each node at each layer, called the self-attention of the node, and then the nodes whose attention scores rank highest are retained to form the Top-K nodes. First, the self-attention score vector Z (one score per node, N being the number of nodes) is calculated:

Z = σ( D_s^(−1/2) · A_s · D_s^(−1/2) · H^(l) · Θ_att )

where Θ_att is a learnable self-attention weight. According to the self-attention scores, the Top-K nodes are selected in a node selection manner, so that a part of the input graph is retained, specifically:

idx = top-rank( Z, ⌈r·N⌉ )

where idx represents the indices of the retained nodes; top-rank(Z, ⌈r·N⌉) denotes selecting the ⌈r·N⌉ nodes whose self-attention scores rank highest; the pooling ratio r indicates the percentage of the number of nodes to be retained. After obtaining the indices of the ⌈r·N⌉ nodes with the largest self-attention values, the masking operation is performed:

H' = H_idx ⊙ Z_idx,  A' = A_idx,idx

where H_idx denotes the node embeddings masked by the retained indices, Z_idx denotes the attention scores corresponding to the retained nodes, ⊙ denotes element-wise multiplication, A_idx,idx denotes the adjacency matrix of the retained nodes, and H' and A' are the node embedding matrix and the adjacency matrix output by the self-attention graph pooling layer.
The readout layer aggregates the node features to form a representation of fixed size, so as to obtain a high-dimensional representation of the graph. The output of the readout layer is specifically:

s = ( (1/N) · Σ_{i=1..N} h_i^(l) ) ‖ ( max_{i=1..N} h_i^(l) )

where N represents the number of nodes, h_i^(l) denotes the node embedding of the i-th node at the l-th layer, and ‖ represents the splicing (concatenation) operation of the features; the readout layer is in fact the splicing of the features obtained by a global average pooling layer and a global max pooling layer.
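The self-attention graph pooling and readout steps can be sketched as follows (illustrative PyTorch; the node scores reuse the normalized propagation of the graph convolution layer, and a tanh non-linearity is assumed for the attention score):

    import torch
    import torch.nn as nn

    class SelfAttentionGraphPool(nn.Module):
        """Retain the Top-K nodes according to a learned self-attention score."""
        def __init__(self, in_features, pooling_ratio=0.5):
            super().__init__()
            self.att = nn.Linear(in_features, 1, bias=False)     # Theta_att
            self.ratio = pooling_ratio

        def forward(self, H, A):
            N = A.size(0)
            A_s = A + torch.eye(N, device=A.device)
            d_inv_sqrt = torch.diag(A_s.sum(dim=1).clamp(min=1e-12).pow(-0.5))
            score = torch.tanh(self.att(d_inv_sqrt @ A_s @ d_inv_sqrt @ H)).squeeze(-1)
            k = max(1, int(self.ratio * N))
            idx = torch.topk(score, k).indices                   # indices of the retained nodes
            H_out = H[idx] * score[idx].unsqueeze(-1)            # masking: embeddings scaled by their scores
            A_out = A[idx][:, idx]                               # adjacency matrix of the retained nodes
            return H_out, A_out, idx

    def readout(H):
        """Fixed-size graph representation: global average pooling || global max pooling."""
        return torch.cat([H.mean(dim=0), H.max(dim=0).values], dim=-1)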
In order to realize the reconstruction output of the data, the forward propagation process of the full connection layer is:

h^(l+1) = σ( W^(l) · h^(l) + b^(l) )

where W^(l) and b^(l) are respectively the learnable weight matrix and the learnable bias of the l-th full connection layer, with dimensions determined by M_l and M_{l+1}, which respectively represent the number of neurons of the l-th full connection layer and of the (l+1)-th full connection layer. The final classification result is then obtained through the softmax classifier:

y_c = e^(z_c) / Σ_{j=1..C} e^(z_j),  c = 1, ..., C

where z_c is the c-th component of the output of the last full connection layer (whose number of neurons equals the number of categories) and C is the number of categories. The graph convolution neural network obtains the graphs produced by the successive self-attention graph pooling layers, obtains the high-dimensional feature representations of the graphs at the different hierarchies through the readout layers, and adds these high-dimensional features to obtain the final high-dimensional feature representation; the high-dimensional features are reconstructed through the full connection layer, the reconstructed features are used as the input of the time domain convolution neural network, and finally the classification result of the input graph is obtained through the softmax classifier.
In the step S3, the input layer of the time domain convolution neural network is connected to the full connection layer of the graph convolution neural network; the input is processed by a plurality of TCN layers, passed through a flatten (expansion) layer, and output by the output layer to the softmax classifier. Each TCN layer converts the dimension of its input into a dimension consistent with that of its output through a one-dimensional fully convolutional structure. The forward propagation process is as follows: sequence data

S = [ s_1, s_2, ..., s_T ]

is formed by splicing the output vectors s_t of the full connection layer at the T time points, so that S has size T × M, where T is the length of the time slice and M is the number of neurons of the full connection layer; S is input into the TCN layers, the output after the plurality of TCN layers is flattened into a one-dimensional vector through the flatten layer, and the classification result of the time slice is finally obtained through the softmax classifier.
Wherein, in the step S3, the TCN layer of the time domain convolution neural network is composed of a causal convolution and a dilated convolution, wherein:

In the causal convolution, an element of the output sequence depends only on the elements that precede it in the input sequence; for time series data, the value at time t of one layer depends only on the values at and before time t of the layer below, namely:

y_t = f( x_1, x_2, ..., x_t )

where y_t represents the output of the causal convolution at time t, and x_1, ..., x_t represent the feature vectors of the lower layer from time 1 to time t. The dilated convolution performs the convolution operation on non-contiguous inputs, using as many (non-adjacent) neurons as the size of the convolution kernel; the dilated convolution has a dilation coefficient d, which controls the spacing of the neurons participating in the convolution operation. The calculation formula of the dilated convolution is:

F(t) = Σ_{i=0..K−1} w_i · x_{t − d·i}

where d represents the dilation coefficient, K represents the size of the convolution kernel, and w_i is the weight of the i-th term of the convolution kernel. When d = 1, the dilated convolution degenerates to the ordinary convolution; by controlling d, the receptive field can be enlarged while the amount of calculation remains unchanged.
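A minimal sketch of one causal, dilated one-dimensional convolution as used in a TCN layer (PyTorch; left padding keeps the output at time t dependent only on inputs at or before t):

    import torch
    import torch.nn as nn

    class CausalDilatedConv1d(nn.Module):
        """1-D convolution with dilation d and left padding, so y_t depends only on x at or before t."""
        def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation            # pad on the left only
            self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation)

        def forward(self, x):                                  # x: (batch, channels, time)
            x = nn.functional.pad(x, (self.pad, 0))            # causal (left) padding
            return self.conv(x)

    # Stacking layers with dilations 1, 2, 4, ... enlarges the receptive field without
    # increasing the amount of computation per layer, e.g.:
    # tcn = nn.Sequential(CausalDilatedConv1d(128, 64, 3, 1), nn.ReLU(),
    #                     CausalDilatedConv1d(64, 64, 3, 2), nn.ReLU())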
In the graph convolution neural network-time domain convolution neural network model constructed in the step S3, the loss function is composed of three parts, which are respectively the node classification loss, the time segment classification loss and the final classification loss. The loss function is specifically expressed as:

L = α · Σ_{i=1..T} Σ_{j=1..K} L_node(i, j) + β · Σ_{i=1..T} L_time(i) + γ · L_final

where L_node(i, j) is the node classification loss of the j-th node at the i-th time point, i = 1, ..., T and j = 1, ..., K. In this scheme a self-attention pooling layer is applied in the graph convolution neural network, so that only the Top-K nodes are finally retained for each graph, and the loss function therefore only calculates the classification loss of the Top-K nodes. L_time(i) is the time segment classification loss at the i-th time point, i = 1, ..., T, where T is the number of time points; it is the classification loss of the graph convolution neural network. L_final is the final classification loss of the time domain convolution neural network. The hyperparameters α, β and γ respectively control the effects of the node classification loss, the time segment classification loss and the final classification loss, with α, β, γ ∈ [0, 1] and α + β + γ = 1. All classification losses use the cross-entropy loss function, which is specifically expressed as:

L_CE = − Σ_j y_j · log( ŷ_j )

where y_j represents the true probability value of the j-th class of the sample and ŷ_j represents the predicted probability value of the j-th class of the sample obtained from the model.
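A sketch of this composite loss (PyTorch; the tensor shapes are illustrative assumptions: node_logits holds the predictions of the Top-K retained nodes at every time point, slice_logits the per-time-point graph predictions, final_logits the output of the time domain branch, and the default weights follow the values used later in Table 2):

    import torch
    import torch.nn.functional as F

    def composite_loss(node_logits, slice_logits, final_logits, label,
                       alpha=0.2, beta=0.3, gamma=0.5):
        """L = alpha * node loss + beta * time-slice loss + gamma * final loss, all cross-entropy."""
        T, K, C = node_logits.shape                         # time points, Top-K nodes, classes
        node_target = torch.full((T * K,), label, dtype=torch.long)
        node_loss = F.cross_entropy(node_logits.reshape(T * K, C), node_target)
        slice_loss = F.cross_entropy(slice_logits, torch.full((T,), label, dtype=torch.long))
        final_loss = F.cross_entropy(final_logits.view(1, C), torch.tensor([label]))
        return alpha * node_loss + beta * slice_loss + gamma * final_loss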
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a medical image classification method based on a graph network time sequence, which constructs the graph network time sequence capable of showing the dynamic change of functional connection between brain partitions through fMRI images, and realizes the showing of the dynamic change rule of the brain functional network connection; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, so that extraction of graph features and learning of change rules in a graph network time sequence are facilitated, and classification capability of the model is effectively improved.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a detailed schematic diagram of the graph convolutional neural network-time domain convolutional neural network model according to the present invention;
FIG. 3 is a schematic diagram of a convolutional neural network according to the present invention;
fig. 4 is a specific schematic diagram of the time domain convolutional neural network according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the present embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a medical image classification method based on graph network time series includes the following steps:
s1: collecting an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
s4: and inputting the fMRI images to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize the classification of the medical images.
In the specific implementation process, a graph network time sequence capable of showing the dynamic changes of the functional connections between brain partitions is constructed from the fMRI images, so that the rule of dynamic change of the brain functional network connections can be captured; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, which facilitates the extraction of graph features and the learning of the change rules in the graph network time sequence, and effectively improves the classification capability of the model.
More specifically, in step S1, the raw fMRI image is preprocessed by DPARSF software.
In the image acquisition process, factors such as the subject's head movement, respiration and heartbeat generate noise and degrade the imaging quality. Therefore, before data analysis, preprocessing is carried out first to reduce the influence of irrelevant noise and improve the signal-to-noise ratio. This scheme implements the preprocessing with the DPARSF software, which specifically includes:
first, removing the first 10 frames of data of each fMRI image sample to obtain a stable signal; second, performing slice-timing correction on each slice to ensure that the data on every slice correspond to the same time point. After the temporal correction, spatial correction is performed: the image frames of each subject are realigned with their average image and spatially normalized to the MNI (Montreal Neurological Institute) space, thereby eliminating differences between individuals; all images are spatially smoothed with a Gaussian kernel with a full width at half maximum of 4 × 4 × 4 mm³; linear trends are removed and low-frequency filtering (0.01 Hz-0.08 Hz) is applied; covariate regression analysis is performed, and the interference factors removed include cerebrospinal fluid signals, white matter signals and head movements.
More specifically, in step S1, the process of sampling the preprocessed fMRI image is as follows: assuming that the length of a sampling time slice is k frames and the preprocessed image contains T frames in total, a start frame t0 is selected (e.g. at random) within the valid range 1 ≤ t0 ≤ T − k + 1, and the sampled segment consists of the k consecutive frames t0, t0 + 1, ..., t0 + k − 1. This is repeated to obtain a plurality of sample segments, which constitute the fMRI image samples.
Generally speaking, a large number of fMRI image samples are needed for training a deep learning model from scratch, and for fMRI images, it is often difficult to obtain a large number of fMRI image samples for model training; therefore, the scheme provides a method for increasing the number of samples by dividing the fMRI image samples into shorter segments, so that the fMRI image sample data is enhanced, the number of training samples is greatly increased, and the training effect of the model is improved.
More specifically, in step S2, the fMRI image samples are composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the graph network time sequence corresponding to the fMRI image sample is specifically as follows:

S21: for a time slice, dividing the human brain into a plurality of regions of interest according to a brain region division template, and taking each region of interest as a vertex to obtain a vertex set V;

S22: taking the correlations among the vertices of the vertex set as edges, and checking the correlation between vertices based on the k-s verification method as the strength of the edges, to obtain an edge set E;

S23: constructing the undirected graph G_t = (V, E) of the time slice according to the vertex set and the edge set;

S24: reselecting a time slice and repeating steps S21-S23 to obtain an undirected graph for each time slice in the fMRI image, and obtaining the graph network time sequence corresponding to the fMRI image sample from all the undirected graphs, {G_1, G_2, ..., G_T}, where T is the number of fMRI time points and G_t represents the graph constructed from the t-th time slice.
In a specific implementation, for each time slice in each fMRI image sample, the human brain is divided into N regions of interest according to a brain region division template, such as the AAL template or the Brainnetome template. In this scheme the AAL template is adopted; it divides the human brain into 116 regions of interest, of which 90 are cerebral regions. Only the 90 cerebral regions of interest are selected in this scheme, and each region of interest is taken as a vertex to obtain the vertex set.
More specifically, in the step S2, the vertex set is expressed as V = {v_1, v_2, ..., v_N}, where v_i represents the i-th region of interest and N is the number of regions of interest; the edge set is expressed by an adjacency matrix E of size N × N, where N represents the number of vertices and E_ij is the strength of the edge between vertices v_i and v_j. Specifically, the p-value obtained by the k-s verification of the BOLD signals of region of interest i and region of interest j is taken as the strength of the edge between vertices v_i and v_j. The k-s verification method can be used to verify whether the data in two regions of interest obey the same distribution: the smaller the p-value, the smaller the correlation between the two regions of interest. The calculation process of the p-value is specifically as follows:

Let the BOLD signal of region of interest i be X = {x_1, x_2, ..., x_n} and the BOLD signal of region of interest j be Y = {y_1, y_2, ..., y_m}, where n and m are respectively the numbers of BOLD signal points of region of interest i and region of interest j, so that the total number of BOLD signal points of the two regions of interest is n + m. The BOLD signals of region of interest i are sorted from small to large and renumbered to obtain the sorted sequence

x_(1) ≤ x_(2) ≤ ... ≤ x_(n)

and the non-descending BOLD signal sequence of region of interest j is obtained in the same way:

y_(1) ≤ y_(2) ≤ ... ≤ y_(m)

Let F_n(x) be the empirical distribution function of region of interest i:

F_n(x) = (number of BOLD signal points x_k of region of interest i with x_k ≤ x) / n

The empirical distribution function G_m(x) of region of interest j is obtained in the same way:

G_m(x) = (number of BOLD signal points y_k of region of interest j with y_k ≤ x) / m

The verification statistics of the k-s verification method are then computed:

D = max over x of | F_n(x) − G_m(x) |

Z = sqrt( n·m / (n + m) ) · D

where D is the maximum value of the absolute value of the difference between the empirical distribution F_n of the BOLD signals of region of interest i and the empirical distribution G_m of the BOLD signals of region of interest j. Finally, the p-value of the k-s verification of the BOLD signals of region of interest i and region of interest j is calculated:

p-value = 2 · e^(−2·Z²)

where Z is the verification statistic and e is the natural constant.
Example 2
More specifically, on the basis of embodiment 1, a graph convolution neural network-time domain convolution neural network model is constructed in step S3, and in the model building process, the design mainly focuses on how to fuse the spatial dimension information and the time dimension information. In the embodiment, firstly, the graph features are extracted by using the graph convolution network, and the graph features are input into the time domain convolution neural network, so that the final classification result is obtained. The step S3 specifically includes the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part as a verification set;
s33: training the graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
More specifically, in step S2, the average value and the standard deviation of the BOLD signals of each region of interest are extracted as the features of its vertex, so as to obtain a vertex attribute matrix. In the step S3, the graph convolution neural network structure designed in this embodiment, as shown in fig. 3, comprises a plurality of convolution pooling units, a full connection layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer. Each graph in the graph network time sequence input to the graph convolution neural network is described by a vertex attribute matrix X of size N × F and an adjacency matrix A of size N × N, where N is the number of vertices and F is the number of vertex attributes. The operation of the graph convolution layer is specifically:

H^(l+1) = σ( D_s^(−1/2) · A_s · D_s^(−1/2) · H^(l) · W^(l) ),  with A_s = A + I_N

where I_N is the N-order identity matrix; D_s is a diagonal matrix representing the degree of each vertex, with (D_s)_ii = Σ_j (A_s)_ij, where (A_s)_ij represents the element in row i and column j of A_s and (D_s)_ii represents the element in row i and column i of D_s; σ(·) is the activation function; H^(l) is the node embedding of the l-th layer, and since the node features of layer 0 are X, H^(0) = X; W^(l) is a learnable weight parameter.
The self-attention graph pooling layer needs to obtain the importance degree of each node at each layer, called the self-attention of the node, and then the nodes whose attention scores rank highest are retained to form the Top-K nodes. First, the self-attention score vector Z (one score per node, N being the number of nodes) is calculated:

Z = σ( D_s^(−1/2) · A_s · D_s^(−1/2) · H^(l) · Θ_att )

where Θ_att is a learnable self-attention weight. This equation is very similar to the operation of the graph convolution layer, except that the graph convolution layer obtains the node embedding of the next layer, while this equation obtains the self-attention scores of the nodes in the current layer. According to the self-attention scores, the Top-K nodes are selected in a node selection manner, so that a part of the input graph is retained, specifically:

idx = top-rank( Z, ⌈r·N⌉ )

where idx represents the indices of the retained nodes; top-rank(Z, ⌈r·N⌉) denotes selecting the ⌈r·N⌉ nodes whose self-attention scores rank highest; the pooling ratio r indicates the percentage of the number of nodes to be retained. After obtaining the indices of the ⌈r·N⌉ nodes with the largest self-attention values, the masking operation is performed:

H' = H_idx ⊙ Z_idx,  A' = A_idx,idx

where H_idx denotes the node embeddings masked by the retained indices, Z_idx denotes the attention scores corresponding to the retained nodes, ⊙ denotes element-wise multiplication, A_idx,idx denotes the adjacency matrix of the retained nodes, and H' and A' are the node embedding matrix and the adjacency matrix output by the self-attention graph pooling layer.
The readout layer aggregates the node features to form a representation of fixed size, so as to obtain a high-dimensional representation of the graph. The output of the readout layer is specifically:

s = ( (1/N) · Σ_{i=1..N} h_i^(l) ) ‖ ( max_{i=1..N} h_i^(l) )

where N represents the number of nodes, h_i^(l) denotes the node embedding of the i-th node at the l-th layer, and ‖ represents the splicing (concatenation) operation of the features; the readout layer is in fact the splicing of the features obtained by a global average pooling layer and a global max pooling layer.
In order to realize the reconstruction output of the data, the forward propagation process of the full connection layer is:

h^(l+1) = σ( W^(l) · h^(l) + b^(l) )

where W^(l) and b^(l) are respectively the learnable weight matrix and the learnable bias of the l-th full connection layer, with dimensions determined by M_l and M_{l+1}, which respectively represent the number of neurons of the l-th full connection layer and of the (l+1)-th full connection layer. The final classification result is then obtained through the softmax classifier:

y_c = e^(z_c) / Σ_{j=1..C} e^(z_j),  c = 1, ..., C

where z_c is the c-th component of the output of the last full connection layer (whose number of neurons equals the number of categories) and C is the number of categories. The graph convolution neural network obtains the graphs produced by the successive self-attention graph pooling layers, obtains the high-dimensional feature representations of the graphs at the different hierarchies through the readout layers, and adds these high-dimensional features to obtain the final high-dimensional feature representation; the high-dimensional features are reconstructed through the full connection layer, the reconstructed features are used as the input of the time domain convolution neural network, and finally the classification result of the input graph is obtained through the softmax classifier.
More specifically, in the step S3, the time domain convolution neural network structure of this embodiment is as shown in fig. 4: its input layer is connected to the full connection layer of the graph convolution neural network, the input is processed by a plurality of time domain convolution (TCN) layers, passed through a flatten (expansion) layer, and output by the output layer to the softmax classifier. Each TCN layer converts the dimension of its input into a dimension consistent with that of its output through a one-dimensional fully convolutional structure. The forward propagation process is as follows: sequence data

S = [ s_1, s_2, ..., s_T ]

is formed by splicing the output vectors s_t of the full connection layer at the T time points, so that S has size T × M, where T is the length of the time slice and M is the number of neurons of the full connection layer; S is input into the TCN layers, the output after the plurality of TCN layers is flattened into a one-dimensional vector through the flatten layer, and the classification result of the time slice is finally obtained through the softmax classifier. In order to reduce the number of parameters of the model, the graph convolution neural network in the model of this embodiment adopts a weight-sharing design.
More specifically, in the step S3, the TCN layer of the time domain convolution neural network is composed of a causal convolution and a dilated convolution, wherein:

In the causal convolution, an element of the output sequence depends only on the elements that precede it in the input sequence, so future data cannot be seen, which makes it a strictly sequence-constrained model; for time series data, the value at time t of one layer depends only on the values at and before time t of the layer below, namely:

y_t = f( x_1, x_2, ..., x_t )

where y_t represents the output of the causal convolution at time t, and x_1, ..., x_t represent the feature vectors of the lower layer from time 1 to time t. The dilated convolution performs the convolution operation on non-contiguous inputs, using as many (non-adjacent) neurons as the size of the convolution kernel; the dilated convolution has a dilation coefficient d, which controls the spacing of the neurons participating in the convolution operation. The calculation formula of the dilated convolution is:

F(t) = Σ_{i=0..K−1} w_i · x_{t − d·i}

where d represents the dilation coefficient, K represents the size of the convolution kernel, and w_i is the weight of the i-th term of the convolution kernel. When d = 1, the dilated convolution degenerates to the ordinary convolution; by controlling d, the receptive field can be enlarged while the amount of calculation remains unchanged.
More specifically, in the graph convolution neural network-time domain convolution neural network model constructed in step S3, the loss function is composed of three parts, which are respectively the node classification loss, the time segment classification loss and the final classification loss. The loss function is specifically expressed as:

L = α · Σ_{i=1..T} Σ_{j=1..K} L_node(i, j) + β · Σ_{i=1..T} L_time(i) + γ · L_final

where L_node(i, j) is the node classification loss of the j-th node at the i-th time point, i = 1, ..., T and j = 1, ..., K. In this scheme a self-attention pooling layer is applied in the graph convolution neural network, so that only the Top-K nodes are finally retained for each graph, and the loss function therefore only calculates the classification loss of the Top-K nodes. L_time(i) is the time segment classification loss at the i-th time point, i = 1, ..., T, where T is the number of time points; it is the classification loss of the graph convolution neural network. L_final is the final classification loss of the time domain convolution neural network. The hyperparameters α, β and γ respectively control the effects of the node classification loss, the time segment classification loss and the final classification loss, with α, β, γ ∈ [0, 1] and α + β + γ = 1. All classification losses use the cross-entropy loss function, which is specifically expressed as:

L_CE = − Σ_j y_j · log( ŷ_j )

where y_j represents the true probability value of the j-th class of the sample and ŷ_j represents the predicted probability value of the j-th class of the sample obtained from the model.
In the specific implementation process, a loss function consisting of node classification loss, time segment classification loss and final classification loss is provided, so that the classification capability of each partial module of the model and the classification capability of the final model are improved.
Example 3
More specifically, in step S3, the graph convolution neural network-time domain convolution neural network model may be tested. In the testing stage, the fMRI image is sampled in a sliding window manner, all the sampled segments are used to construct graph network time sequences and are input into the graph convolution neural network-time domain convolution neural network model provided by this scheme, and the classification results obtained for all the sampled segments are combined by simple voting to obtain the final classification result. Specifically, assume a test fMRI image sample X with T frames, a sampling segment length of k frames and a sliding step of m; a sampled segments X_1, X_2, ..., X_a are finally obtained, where X_i consists of the k consecutive frames starting at frame (i − 1)·m + 1 and a = ⌊(T − k)/m⌋ + 1. They are input into the model to obtain the corresponding prediction classification results y_1, y_2, ..., y_a, where each y_i is one of the C categories, and the final classification y is obtained by simple voting, i.e. taking the category that appears most frequently among y_1, ..., y_a.
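A sketch of this sliding-window voting procedure (Python; classify_segment is a placeholder for the trained graph convolution neural network-time domain convolution neural network pipeline applied to one sampled segment):

    from collections import Counter

    def classify_by_voting(fmri, classify_segment, k=64, m=10):
        """Slide a window of k frames with step m over the test image and vote."""
        T = fmri.shape[0]
        predictions = []
        for start in range(0, T - k + 1, m):               # a = floor((T - k) / m) + 1 segments
            predictions.append(classify_segment(fmri[start:start + k]))
        return Counter(predictions).most_common(1)[0][0]   # simple majority vote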
In the following, taking Alzheimer's Disease (AD) as an example and using fMRI image data from the large American Alzheimer's disease public database ADNI (Alzheimer's Disease Neuroimaging Initiative), a total of 250 fMRI images (121 AD, 129 control) from 60 subjects (25 AD, 35 control) were collected; that is, one subject may have several fMRI images. The above data are input into the model of the present application as the experimental data of the present invention to evaluate the effect of the model and compare performance differences. The training data are input into the model as described above, and the performance of the model is then tested with the test data set. To reduce the impact of the data set partition on the experimental results, this example adopts five-fold cross-validation to evaluate the performance of the model. To avoid data leakage, the data set is partitioned according to subjects, i.e. the multiple fMRI images of one subject appear only in the training set or only in the test set.
1. Parameter setting
During training, the batch size is 32 and the number of epochs is 200. The parameters are updated with the Adam gradient-descent method with an initial learning rate of 0.001, and the learning rate decays exponentially over the course of training. Part of the training set is split off as a validation set, and during training the parameters that give the model the highest accuracy on the validation set are kept as the final model parameters. At test time, the sliding step m = 10 is used.
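For illustration, the training configuration described above might be sketched in PyTorch as follows; the exponential decay factor of 0.99 is an assumed value (only the fact of exponential decay is stated), and the model, data loaders and loss helper passed in are hypothetical.

```python
import torch

def train(model, train_loader, val_loader, loss_fn, epochs=200, lr=1e-3, decay=0.99):
    """Training-loop sketch: Adam, initial lr 0.001, exponential lr decay per epoch,
    keeping the parameters with the highest validation accuracy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=decay)
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        model.train()
        for graphs, labels in train_loader:          # batches of size 32 are assumed
            optimizer.zero_grad()
            loss = loss_fn(model(graphs), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()                             # exponential learning-rate decay
        # Evaluate on the validation split and keep the best-performing weights.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for graphs, labels in val_loader:
                correct += (model(graphs).argmax(dim=1) == labels).sum().item()
                total += labels.numel()
        if total and correct / total > best_acc:
            best_acc = correct / total
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    if best_state is not None:
        model.load_state_dict(best_state)
    return model, best_acc
```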
2. Results of the experiment
Table 1 shows the effect of different sampling lengths on the model results; the best generalization performance was obtained with a sampling frame length of 64.
TABLE 1
Sampling frame length    Accuracy    Standard deviation
16                       0.68        0.08
32                       0.62        0.16
48                       0.69        0.07
64                       0.72        0.10
Table 2 shows, for a sampling frame length of 64, the influence of different values of the loss-function hyper-parameters $\alpha$, $\beta$ and $\gamma$ on the model results:

TABLE 2
α      β      γ      Accuracy    Standard deviation
0      0      1      0.53        0.10
0      0.5    0.5    0.66        0.12
0.2    0.3    0.5    0.72        0.10
As can be seen from Table 2, the loss function designed by the present invention is effective: compared with using only the final loss ($\alpha=0$, $\beta=0$, $\gamma=1$) or using only the time-slice loss together with the final loss ($\alpha=0$, $\beta=0.5$, $\gamma=0.5$), the classification performance of the model trained with the proposed loss function is improved to a certain extent. Finally, in order to verify the validity of the method proposed in the present application for constructing functional connections between regions of interest based on the k-s verification method (Kolmogorov-Smirnov test), the traditional Pearson correlation method is used to construct functional connections as a comparison experiment; that is, for each time point, the connection strength between region of interest $i$ and region of interest $j$ is:
$$A_{ij} = \frac{\sum_{t=1}^{n}\left(x_t^i - \bar{x}^i\right)\left(x_t^j - \bar{x}^j\right)}{\sqrt{\sum_{t=1}^{n}\left(x_t^i - \bar{x}^i\right)^2}\,\sqrt{\sum_{t=1}^{n}\left(x_t^j - \bar{x}^j\right)^2}}$$
wherein $x^i$ and $x^j$ denote the BOLD signals of region of interest $i$ and region of interest $j$ on which the Pearson correlation analysis is performed, $x_t^i$ and $x_t^j$ are their values at the $t$-th time point, $n$ is the number of BOLD signal time points, and $\bar{x}^i$ and $\bar{x}^j$ respectively denote the mean BOLD signal values of region of interest $i$ and region of interest $j$. In the graph network time sequence constructed by this method, the edge weights of every graph are the same; the five-fold cross-validation obtained with this baseline gives an average accuracy of 60% with a standard deviation of 5%. Compared with this, the method provided by the invention improves the accuracy by about 12 percentage points and obtains a better result, which shows that the graph network time-sequence construction method provided by the invention is effective. This can be interpreted as follows: the k-s-verification-based construction of the graph network time sequence can effectively reflect how the functional correlations between brain regions presented in the neurophysiological process change dynamically over time, whereas the traditional Pearson-correlation-based method constructs the brain functional connections from all time points and therefore cannot express this dynamic pattern.
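For reference, the Pearson-correlation baseline connectivity described above can be computed along the lines of the following sketch (NumPy); the regions-by-time array layout and helper name are assumptions.

```python
import numpy as np

def pearson_connectivity(bold, eps=1e-8):
    """bold: array of shape (N_regions, T) holding the mean BOLD time series of each
    region of interest. Returns the N x N Pearson correlation (functional connectivity)
    matrix, which, unlike the k-s-based graphs, is identical for every time slice."""
    centered = bold - bold.mean(axis=1, keepdims=True)   # subtract each region's mean
    cov = centered @ centered.T                          # unnormalised covariance
    std = np.sqrt(np.diag(cov))                          # per-region signal norms
    return cov / (np.outer(std, std) + eps)              # Pearson correlation matrix
```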
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively enumerate all embodiments here. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (9)

1. A medical image classification method based on a graph network time sequence is characterized by comprising the following steps:
s1: acquiring an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
in step S2, the fMRI image sample is composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the graph network time sequence corresponding to the fMRI image sample specifically includes:
s21: for a time slice, dividing a human brain into a plurality of interested areas according to a brain area division template; taking each interested area as a vertex to obtain a vertex set;
s22: taking the correlation among the vertexes of the vertex set as an edge, and checking the correlation between the vertexes as the strength of the edge based on a k-s verification method to obtain an edge set;
s23: constructing an undirected graph of the time slice according to the vertex set and the edge set;
s24: reselecting a time slice, repeatedly executing the steps S21-S24 to obtain an undirected graph of each time slice in an fMRI image, and obtaining a graph network time sequence corresponding to the fMRI image sample according to all the undirected graphs;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
the graph convolution neural network-time domain convolution neural network model comprises a graph convolution neural network and a time domain convolution neural network, and the graph convolution neural network comprises a plurality of convolution pooling units, a fully connected layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer, wherein the graph convolution neural network takes the graphs produced by the plurality of self-attention graph pooling layers, obtains high-dimensional feature representations of the graphs at different levels through the readout layers, adds the high-dimensional features of these graphs to obtain the final high-dimensional feature representation, reconstructs the high-dimensional features through the fully connected layer, and uses the reconstructed features as the input of the time domain convolution neural network;
the input layer of the time domain convolution neural network is connected to the fully connected layer of the graph convolution neural network; its input is processed by a plurality of TCN layers, expanded into a one-dimensional vector, and passed through the output layer to a softmax classifier, which finally gives the classification result of the input graph;
s4: and inputting the fMRI images to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize the classification of the medical images.
2. The method for classifying medical images based on graph network time series according to claim 1, wherein in step S1, raw fMRI images are preprocessed by DPARSF software.
3. The method for classifying medical images based on graph network time series according to claim 1, wherein in the step S1, the process of sampling the preprocessed fMRI image specifically comprises: assuming a time-slice length of $k$ frames for the sample, a start frame $t_0$ with $1 \le t_0 \le T-k+1$ ($T$ being the total number of frames) is selected, and the sample segment $\{x_{t_0}, x_{t_0+1}, \ldots, x_{t_0+k-1}\}$ is finally obtained by sampling; the above steps are repeated to obtain a plurality of sample segments, which form the fMRI image samples.
4. The method according to claim 1, wherein in step S2 the vertex set is represented as $V = \{v_1, v_2, \ldots, v_N\}$, where $v_i$ represents the $i$-th region of interest (ROI) and $N$ is the number of regions of interest; the edge set is represented by an adjacency matrix $A \in \mathbb{R}^{N \times N}$, where $N$ is the number of vertices and $A_{ij}$ is the strength of the edge between vertices $v_i$ and $v_j$; specifically, the $p\text{-}value$ obtained by applying the k-s verification method to the BOLD signals of region of interest $i$ and region of interest $j$ is used as the strength of the edge between vertices $v_i$ and $v_j$; the k-s verification method can be used to verify whether the data in the two regions of interest obey the same distribution, and the smaller the $p\text{-}value$, the smaller the correlation between the two regions of interest; the $p\text{-}value$ is calculated as follows:

let the BOLD signal of region of interest $i$ be $X^i = \{x^i_1, x^i_2, \ldots, x^i_{n_1}\}$ and the BOLD signal of region of interest $j$ be $X^j = \{x^j_1, x^j_2, \ldots, x^j_{n_2}\}$, where $n_1$ and $n_2$ are respectively the numbers of BOLD signals of region of interest $i$ and region of interest $j$, and the total number of BOLD signals of the two regions of interest is $n = n_1 + n_2$; the BOLD signals of region of interest $i$ are sorted from small to large and renumbered, giving the sorted BOLD signals

$$x^i_{(1)} \le x^i_{(2)} \le \cdots \le x^i_{(n_1)},$$

and the BOLD signals of region of interest $j$ are likewise obtained in non-descending order:

$$x^j_{(1)} \le x^j_{(2)} \le \cdots \le x^j_{(n_2)};$$

let $F_{n_1}(x)$ be the empirical distribution function of region of interest $i$:

$$F_{n_1}(x) = \frac{m_i(x)}{n_1}$$

wherein $m_i(x)$ is the number of BOLD signals in region of interest $i$ that are less than or equal to $x$; the empirical distribution function $G_{n_2}(x)$ of region of interest $j$ is obtained in the same way:

$$G_{n_2}(x) = \frac{m_j(x)}{n_2}$$

wherein $m_j(x)$ is the number of BOLD signals in region of interest $j$ that are less than or equal to $x$;

the verification statistic of the k-s verification method is then computed:

$$D = \max_{x}\left|F_{n_1}(x) - G_{n_2}(x)\right|, \qquad Z = \sqrt{\frac{n_1 n_2}{n_1 + n_2}}\, D$$

wherein $D$ is the maximum of the absolute value of the difference between the empirical distribution $F_{n_1}(x)$ of the BOLD signals of region of interest $i$ and the empirical distribution $G_{n_2}(x)$ of the BOLD signals of region of interest $j$; finally, the $p\text{-}value$ of the k-s verification of the BOLD signals of region of interest $i$ and region of interest $j$ is calculated:

$$p\text{-}value = 2e^{-2Z^{2}}$$

where $Z$ is the verification statistic and $e$ is the natural constant.
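As an illustration of this edge-strength computation, the following sketch uses SciPy's two-sample Kolmogorov-Smirnov test (scipy.stats.ks_2samp), which computes the two-sample statistic and a p-value (exact or asymptotic depending on sample size) as a stand-in for the verification described above; the helper name and list-of-arrays layout are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_adjacency(bold_signals):
    """bold_signals: list of 1-D arrays, one BOLD signal vector per region of interest.
    Returns the N x N adjacency matrix whose (i, j) entry is the two-sample K-S p-value
    between the BOLD signals of regions i and j, used here as the edge strength."""
    n = len(bold_signals)
    adj = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # ks_2samp returns the statistic D and the p-value of the two-sample test.
            _, p_value = ks_2samp(bold_signals[i], bold_signals[j])
            adj[i, j] = adj[j, i] = p_value
    return adj
```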
5. The method according to claim 1, wherein the step S3 specifically comprises the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part of the graph network time sequence as a verification set;
s33: training a graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
6. The method according to claim 5, wherein in step S2 the mean and standard deviation of the BOLD signals in each region of interest are extracted as the vertex features to obtain a vertex attribute matrix; in step S3, the graph network time sequence input to the graph convolution neural network is set to contain a vertex attribute matrix $X \in \mathbb{R}^{N \times M}$ and an adjacency matrix $A \in \mathbb{R}^{N \times N}$, where $N$ is the number of vertices and $M$ is the number of vertex attributes; the operation of the graph convolution layer is specifically:

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad \tilde{A} = A + I_N$$

wherein $I_N$ is the $N$-order identity matrix; $\tilde{D}$ is a diagonal matrix representing the degree of each vertex, $\tilde{D}_{ii} = \sum_{j}\tilde{A}_{ij}$, $\tilde{A}_{ij}$ denotes the element in row $i$ and column $j$ of matrix $\tilde{A}$, and $\tilde{D}_{ii}$ denotes the element in row $i$ and column $i$ of matrix $\tilde{D}$; $H^{(l)}$ is the node embedding of the $l$-th layer, and if the node feature of layer 0 is $X$ then $H^{(0)} = X$; $W^{(l)}$ is a learnable weight parameter;

the self-attention graph pooling layer needs to obtain the degree of importance of the nodes in each layer, called the node self-attention, and the nodes ranked before $K$ by attention score are retained to form the Top-K nodes; the self-attention score $Z \in \mathbb{R}^{N \times 1}$, where $N$ is the number of vertices, is first calculated:

$$Z = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} H^{(l)} \Theta_{att}\right)$$

wherein $\Theta_{att}$ is a learnable self-attention weight; according to the self-attention scores, the Top-K nodes are selected by a node selection operation that retains a part of the input graph network time sequence, specifically:

$$idx = \operatorname{top\text{-}rank}\!\left(Z, \lceil pN \rceil\right)$$

wherein $idx$ denotes the indices of the retained nodes; $\operatorname{top\text{-}rank}(Z, \lceil pN \rceil)$ denotes selecting the $\lceil pN \rceil$ nodes ranked highest by $Z$; $p$ is the pooling rate, expressing the percentage of nodes to be retained, so the indices of the $\lceil pN \rceil$ nodes with the largest self-attention values are obtained; the masking operation is then performed:

$$H' = H^{(l)}_{idx,:} \odot Z_{idx}, \qquad A' = A_{idx,idx}$$

wherein $H^{(l)}_{idx,:}$ denotes the node embeddings whose indices are retained by the mask, $Z_{idx}$ denotes the attention scores corresponding to the retained nodes, $\odot$ denotes element-wise multiplication, $A_{idx,idx}$ denotes the adjacency matrix of the retained nodes, and $H'$ and $A'$ denote the node embedding and adjacency matrix output by the self-attention pooling layer;

the readout layer aggregates the node features to form a fixed-size representation and thereby obtains the high-dimensional representation of the graph; the readout layer output is specifically:

$$s = \frac{1}{N}\sum_{i=1}^{N} h_i^{(l)} \;\Big\Vert\; \max_{i=1}^{N} h_i^{(l)}$$

wherein $N$ denotes the number of vertices, $h_i^{(l)}$ denotes the node embedding of the $i$-th node in the $l$-th layer, and $\Vert$ denotes the splicing (concatenation) operation of the features; the readout layer is in fact the concatenation of the features obtained by a global average pooling layer and a global max pooling layer;

in order to realize the reconstruction output of the data, the forward propagation process of the fully connected layer is:

$$a^{(l+1)} = \sigma\!\left(W^{(l)} a^{(l)} + b^{(l)}\right)$$

wherein $W^{(l)} \in \mathbb{R}^{d_{l+1} \times d_l}$ and $b^{(l)} \in \mathbb{R}^{d_{l+1}}$ are respectively the learnable weight matrix and the learnable bias of the $l$-th fully connected layer, and $d_l$ and $d_{l+1}$ respectively denote the numbers of neurons of the $l$-th and $(l+1)$-th fully connected layers; finally, the final classification result is obtained through the softmax classifier:

$$\hat{y}_j = \frac{e^{z_j}}{\sum_{c=1}^{C} e^{z_c}}$$

wherein $z$ is the output of the last fully connected layer, $d_l$ is the number of neurons of the $l$-th fully connected layer, and $C$ is the number of categories.
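The graph convolution, self-attention pooling and readout operations above can be sketched in plain PyTorch roughly as follows; this is an illustrative reading of the formulas (with an assumed ReLU activation and pooling ratio), not the applicant's implementation.

```python
import math
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution layer: H' = sigma(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a_tilde = adj + torch.eye(adj.size(0), device=adj.device)   # A + I_N
        deg = a_tilde.sum(dim=1)                                    # vertex degrees
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-12).pow(-0.5))     # D^-1/2
        a_norm = d_inv_sqrt @ a_tilde @ d_inv_sqrt                  # normalised adjacency
        return torch.relu(a_norm @ self.weight(h))

class SelfAttentionPool(nn.Module):
    """Self-attention graph pooling: score the nodes with a graph convolution,
    keep the ceil(p*N) highest-scoring nodes, and mask embeddings and adjacency."""
    def __init__(self, dim, pool_ratio=0.5):
        super().__init__()
        self.score_conv = GraphConv(dim, 1)
        self.pool_ratio = pool_ratio

    def forward(self, h, adj):
        scores = self.score_conv(h, adj).squeeze(-1)        # self-attention scores Z
        k = max(1, math.ceil(self.pool_ratio * h.size(0)))  # ceil(p * N)
        idx = torch.topk(scores, k).indices                 # top-rank(Z, ceil(pN))
        h_out = h[idx] * scores[idx].unsqueeze(-1)          # masking / gating step
        adj_out = adj[idx][:, idx]                          # adjacency of retained nodes
        return h_out, adj_out

def readout(h):
    """Concatenation of global average pooling and global max pooling of node embeddings."""
    return torch.cat([h.mean(dim=0), h.max(dim=0).values], dim=0)
```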
7. The method according to claim 6, wherein in step S3 each TCN layer transforms its input so that the input dimension size is consistent with the output dimension size through a one-dimensional full convolution structure, and its forward propagation process is as follows: the sequence data $S = [s_1, s_2, \ldots, s_T]$ is formed by splicing the output vectors of the fully connected layer, where $s_t \in \mathbb{R}^{H}$, $T$ is the length of the time slice and $H$ is the number of neurons of the fully connected layer; $S$ is input into the TCN layers, the output after passing through a plurality of TCN layers is expanded into a one-dimensional vector by an expansion (flatten) layer, and it is finally classified by the softmax classifier to obtain the classification result of the time slices.
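As a rough sketch of this head, the spliced fully connected outputs can be treated as a (batch, T, H) sequence and passed through a small stack of causal 1-D convolutions, a flatten step and a softmax classifier, as below; the layer widths, kernel size and dilation values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TCNHead(nn.Module):
    """Sketch of the time-domain head: a sequence of per-time-point feature vectors is
    passed through causal 1-D convolutions, flattened, and classified with softmax."""
    def __init__(self, feat_dim, seq_len, num_classes, hidden=32, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(feat_dim, hidden, kernel, dilation=1)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel, dilation=2)
        self.pad1 = (kernel - 1) * 1          # left padding keeps the convolution causal
        self.pad2 = (kernel - 1) * 2
        self.classifier = nn.Linear(hidden * seq_len, num_classes)

    def forward(self, s):
        # s: (batch, T, H) sequence formed by splicing the fully connected outputs.
        x = s.transpose(1, 2)                                            # (batch, H, T)
        x = torch.relu(self.conv1(nn.functional.pad(x, (self.pad1, 0))))
        x = torch.relu(self.conv2(nn.functional.pad(x, (self.pad2, 0))))
        x = x.flatten(start_dim=1)                                       # flatten layer
        return torch.softmax(self.classifier(x), dim=1)

# Usage with assumed sizes: 48 time points, 64-dimensional features, 2 classes.
head = TCNHead(feat_dim=64, seq_len=48, num_classes=2)
probs = head(torch.randn(4, 48, 64))    # -> shape (4, 2)
```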
8. The method for classifying medical images based on graph network time series according to claim 7, wherein in the step S3 the TCN layer of the time domain convolution neural network is composed of causal convolution and dilated convolution, wherein:

in causal convolution, an element of the output sequence depends only on the elements that precede it in the input sequence; for time-series data, the value at time $T$ of a layer depends only on the values at and before time $T$ of the layer below it, that is:

$$y_T^{(l+1)} = f\!\left(x_1^{(l)}, x_2^{(l)}, \ldots, x_T^{(l)}\right)$$

wherein $y_T^{(l+1)}$ denotes the output of the causal convolution at time $T$, and $x_1^{(l)}, \ldots, x_T^{(l)}$ denote the feature vectors of layer $l$ from time 1 to time $T$; the dilated convolution refers to performing the convolution operation with non-adjacent neurons whose number equals the size of the convolution kernel; the dilated convolution has a dilation coefficient $d$ used to control the degree of discontinuity of the neurons participating in the convolution operation, and its calculation formula is:

$$F(s_t) = \sum_{i=0}^{k-1} f_i \cdot s_{t - d \cdot i}$$

wherein $d$ denotes the dilation coefficient, $k$ denotes the size of the convolution kernel, and $f_i$ denotes the weight of the $i$-th term of the convolution kernel; when $d$ is 1, the dilated convolution degenerates to the ordinary convolution, and by controlling $d$ the receptive field can be enlarged without increasing the amount of computation.
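A minimal PyTorch sketch of a single dilated causal convolution matching the formula above is given below; the left-padding used to enforce causality is a standard implementation device rather than something stated in the claim.

```python
import torch
import torch.nn as nn

class DilatedCausalConv1d(nn.Module):
    """Causal 1-D convolution with dilation d: the output at time t depends only on
    inputs at times t, t-d, t-2d, ..., i.e. F(s_t) = sum_i f_i * s_{t - d*i}."""
    def __init__(self, in_channels, out_channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # pad only on the left (the past)
        self.conv = nn.Conv1d(in_channels, out_channels,
                              kernel_size, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time); left-pad so no future value leaks into time t.
        x = nn.functional.pad(x, (self.pad, 0))
        return self.conv(x)

# Usage: stacking layers with dilation 1, 2, 4, ... enlarges the receptive field
# without increasing the amount of computation per layer.
layer = DilatedCausalConv1d(in_channels=8, out_channels=8, kernel_size=3, dilation=2)
out = layer(torch.randn(1, 8, 64))    # -> shape (1, 8, 64), same length as the input
```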
9. The method according to claim 8, wherein in the graph convolution neural network-time domain convolution neural network model constructed in the step S3, the loss function is composed of three parts, namely the node classification loss, the time-slice classification loss and the final classification loss, and is specifically expressed as:

$$L = \alpha \sum_{i=1}^{T}\sum_{j} L_{node}^{i,j} + \beta \sum_{i=1}^{T} L_{time}^{i} + \gamma\, L_{final}$$

wherein $L_{node}^{i,j}$ is the node classification loss of the $j$-th node at the $i$-th time point, $L_{time}^{i}$ is the time-slice classification loss at the $i$-th time point, and $T$ is the number of time points, these terms forming the classification loss of the graph convolution neural network; $L_{final}$ is the classification loss of the final time-domain convolution neural network; the hyper-parameters $\alpha$, $\beta$ and $\gamma$ respectively control the influence of the node classification loss, the time-slice classification loss and the final classification loss, with $\alpha, \beta, \gamma \in [0,1]$ and $\alpha + \beta + \gamma = 1$; all classification losses use the cross-entropy loss function, which is specifically expressed as:

$$L_{CE} = -\sum_{j=1}^{C} y_j \log \hat{y}_j$$

wherein $y_j$ denotes the true probability value of the $j$-th class for the sample, and $\hat{y}_j$ denotes the predicted probability value of the $j$-th class for the sample obtained from the model.
CN202210814372.2A 2022-07-12 2022-07-12 Medical image classification method based on graph network time sequence Active CN115222688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210814372.2A CN115222688B (en) 2022-07-12 2022-07-12 Medical image classification method based on graph network time sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210814372.2A CN115222688B (en) 2022-07-12 2022-07-12 Medical image classification method based on graph network time sequence

Publications (2)

Publication Number Publication Date
CN115222688A CN115222688A (en) 2022-10-21
CN115222688B true CN115222688B (en) 2023-01-10

Family

ID=83612470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210814372.2A Active CN115222688B (en) 2022-07-12 2022-07-12 Medical image classification method based on graph network time sequence

Country Status (1)

Country Link
CN (1) CN115222688B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030308B (en) * 2023-02-17 2023-06-09 齐鲁工业大学(山东省科学院) Multi-mode medical image classification method and system based on graph convolution neural network
CN115909016B (en) * 2023-03-10 2023-06-23 同心智医科技(北京)有限公司 GCN-based fMRI image analysis system, method, electronic equipment and medium
CN117435995B (en) * 2023-12-20 2024-03-19 福建理工大学 Biological medicine classification method based on residual map network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855491A (en) * 2012-07-26 2013-01-02 中国科学院自动化研究所 Brain function magnetic resonance image classification method based on network centrality
CN110720906A (en) * 2019-09-25 2020-01-24 上海联影智能医疗科技有限公司 Brain image processing method, computer device, and readable storage medium
WO2021001238A1 (en) * 2019-07-01 2021-01-07 Koninklijke Philips N.V. Fmri task settings with machine learning
CN112766332A (en) * 2021-01-08 2021-05-07 广东中科天机医疗装备有限公司 Medical image detection model training method, medical image detection method and device
CN113080847A (en) * 2021-03-17 2021-07-09 天津大学 Device for diagnosing mild cognitive impairment based on bidirectional long-short term memory model of graph
CN113592836A (en) * 2021-08-05 2021-11-02 东南大学 Deep multi-modal graph convolution brain graph classification method
CN114241240A (en) * 2021-12-15 2022-03-25 中国科学院深圳先进技术研究院 Method and device for classifying brain images, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364006B (en) * 2018-01-17 2022-03-08 超凡影像科技股份有限公司 Medical image classification device based on multi-mode deep learning and construction method thereof
CN111667459B (en) * 2020-04-30 2023-08-29 杭州深睿博联科技有限公司 Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
US20220122250A1 (en) * 2020-10-19 2022-04-21 Northwestern University Brain feature prediction using geometric deep learning on graph representations of medical image data

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855491A (en) * 2012-07-26 2013-01-02 中国科学院自动化研究所 Brain function magnetic resonance image classification method based on network centrality
WO2021001238A1 (en) * 2019-07-01 2021-01-07 Koninklijke Philips N.V. Fmri task settings with machine learning
CN110720906A (en) * 2019-09-25 2020-01-24 上海联影智能医疗科技有限公司 Brain image processing method, computer device, and readable storage medium
CN112766332A (en) * 2021-01-08 2021-05-07 广东中科天机医疗装备有限公司 Medical image detection model training method, medical image detection method and device
CN113080847A (en) * 2021-03-17 2021-07-09 天津大学 Device for diagnosing mild cognitive impairment based on bidirectional long-short term memory model of graph
CN113592836A (en) * 2021-08-05 2021-11-02 东南大学 Deep multi-modal graph convolution brain graph classification method
CN114241240A (en) * 2021-12-15 2022-03-25 中国科学院深圳先进技术研究院 Method and device for classifying brain images, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis;Xiaoxiao Li等;《Medical Image Analysis》;20210912;第1-13页 *
Discovery of Genetic Biomarkers for Alzheimer’s Disease Using Adaptive Convolutional Neural Networks Ensemble and Genome‑Wide Association Studies;An Zeng等;《Interdisciplinary Sciences: Computational Life Sciences》;20210819;第787-800页 *
Deep learning techniques for medical images: the development from convolution to graph convolution; Tang Chaosheng et al.; Journal of Image and Graphics; 2021-06-02; Vol. 26, No. 09; pp. 2078-2093 *
An auxiliary diagnosis model for Alzheimer's disease based on a 3D convolutional neural network and regions of interest; Zeng An et al.; Journal of Biomedical Engineering Research; 2020-12-31; Vol. 39, No. 2; pp. 133-138 *

Also Published As

Publication number Publication date
CN115222688A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN115222688B (en) Medical image classification method based on graph network time sequence
KR102125127B1 (en) Method of brain disorder diagnosis via deep learning
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
Kar et al. Retinal vessel segmentation using multi-scale residual convolutional neural network (MSR-Net) combined with generative adversarial networks
Hong et al. Brain age prediction of children using routine brain MR images via deep learning
Wang et al. Ensemble of 3D densely connected convolutional network for diagnosis of mild cognitive impairment and Alzheimer’s disease
CN112037179B (en) Method, system and equipment for generating brain disease diagnosis model
CN112465905A (en) Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning
Torres et al. Evaluation of interpretability for deep learning algorithms in EEG emotion recognition: A case study in autism
Liu et al. An enhanced multi-modal brain graph network for classifying neuropsychiatric disorders
CN110148145A (en) A kind of image object area extracting method and application merging boundary information
Wang et al. Classification of structural MRI images in Adhd using 3D fractal dimension complexity map
Bayram et al. Deep learning methods for autism spectrum disorder diagnosis based on fMRI images
CN115272295A (en) Dynamic brain function network analysis method and system based on time domain-space domain combined state
Qiang et al. A deep learning method for autism spectrum disorder identification based on interactions of hierarchical brain networks
CN112036298A (en) Cell detection method based on double-segment block convolutional neural network
CN110400610B (en) Small sample clinical data classification method and system based on multichannel random forest
Seshadri Ramana et al. Deep convolution neural networks learned image classification for early cancer detection using lightweight
Jung et al. Inter-regional high-level relation learning from functional connectivity via self-supervision
Kong et al. Data enhancement based on M2-Unet for liver segmentation in Computed Tomography
CN112861881A (en) Honeycomb lung recognition method based on improved MobileNet model
Mareeswari et al. A survey: Early detection of Alzheimer’s disease using different techniques
Huang et al. DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel
Jacaruso Accuracy improvement for Fully Convolutional Networks via selective augmentation with applications to electrocardiogram data
Castro et al. Development of a deep learning-based brain-computer interface for visual imagery recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant