CN115222688A - Medical image classification method based on graph network time sequence - Google Patents
- Publication number: CN115222688A
- Application number: CN202210814372.2A
- Authority: CN (China)
- Prior art keywords: graph, neural network, layer, time, convolution neural
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012—Biomedical image inspection
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/764—Image or video recognition using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82—Image or video recognition using pattern recognition or machine learning using neural networks
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/30016—Brain
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention provides a medical image classification method based on a graph network time sequence, which comprises the following steps: acquiring fMRI image samples; constructing, based on the Kolmogorov-Smirnov (k-s) test, a graph network time sequence capable of showing the dynamic changes of functional connection among brain partitions, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample; constructing a graph convolution neural network-time domain convolution neural network model, training and verifying it, and finally classifying medical images with the trained model. The medical image classification method provided by the invention realizes the representation of the dynamic change rule of brain functional network connection; the proposed graph convolution neural network-time domain convolution neural network model facilitates the extraction of graph features and the learning of change rules in the graph network time sequence, and effectively improves the classification capability of the model.
Description
Technical Field
The invention relates to the technical field of computer analysis of medical images, in particular to a medical image classification method based on a graph network time sequence.
Background
With the development of modern medicine, medical images play an increasingly important role in the auxiliary diagnosis and treatment of diseases. A large number of studies indicate that many neuropsychiatric diseases (such as AD and schizophrenia) are related to topological changes of brain structural and functional networks. The recently proposed human connectome mainly studies the dynamic complex functional networks formed by brain connections on a wide spatio-temporal scale, in order to better understand the pathological basis of neuropsychiatric diseases of the brain and, further, the working mechanisms within the brain. Among imaging modalities, functional magnetic resonance imaging (fMRI) has both high temporal resolution and high spatial resolution, provides an important means for studying the functions of the human brain, and has become a research hotspot and difficulty of human brain connectomics. At the same time, however, fMRI images are susceptible to noise interference and have high data dimensionality, which causes great difficulty in data processing and analysis. Given these characteristics of fMRI images, deep learning methods and a data-driven approach can mine more valuable information and simplify the process of manually processing and analyzing data, thereby reducing the burden on doctors and researchers.
In existing fMRI-based medical image classification methods, a brain functional network is constructed from fMRI on the basis of brain connectivity, and classification is performed according to the topological structure and various network parameters of the brain functional network. However, such methods construct only a single brain functional network for an individual brain from the BOLD signal time series contained in an fMRI image, and do not fully exploit the correlation information of those BOLD signal time series in the spatial dimension. As a result, they cannot reflect how the correlations among different brain areas change over time during neurophysiological processes, even though these changing trends may play a critical role in fMRI classification.
The prior art discloses a brain network classification method based on a graph neural network. The method comprises the following steps: first, extracting the BOLD signals of all brain areas from an fMRI image; second, constructing a brain map capable of reflecting the topological structure characteristics of functional connection between brain areas; third, inputting the constructed brain network and the actual diagnosis labels into a graph convolution neural network for feature learning and model training. Because this method constructs a single brain network from the fMRI image and performs feature learning and classification on that network alone, important information hidden in the image may be ignored, and the dynamic changes of the correlations among different brain areas over time during neurophysiological processes cannot be reflected.
The prior art also discloses a training method and apparatus, a computer device and a storage medium for constructing a network model based on fMRI. The method comprises the following steps: sampling and preprocessing original fMRI image data; establishing a 3D-CNN + LSTM model; creating fMRI image segments as a first training data set, and using the fMRI segments with the minimum loss value in the first training data set as a second training data set; and training the model with the second data set and outputting a classification result. The two convolutional neural models adopted by this method can extract temporal and spatial information from the fMRI image, but the models have many parameters and the input fMRI image has high dimensionality, so only a short time segment can be selected as the model input, and long-term dynamic change information in the fMRI image cannot be acquired.
Disclosure of Invention
In order to solve at least one technical defect, the invention provides a medical image classification method based on a graph network time sequence, which can reflect the dynamic change rule of brain function network connection.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a medical image classification method based on a graph network time sequence comprises the following steps:
s1: collecting an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s (Kolmogorov-Smirnov) verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
s4: and inputting the fMRI images to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize the classification of the medical images.
In the scheme, a graph network time sequence capable of showing the dynamic change of the functional connection between the brain partitions is constructed through the fMRI image, so that the showing of the dynamic change rule of the brain functional network connection is realized; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, so that extraction of graph features and learning of change rules in a graph network time sequence are facilitated, and classification capability of the model is effectively improved.
In the step S1, the raw fMRI image is preprocessed by using DPARSF software.
During image acquisition, factors such as the head movement, respiration and heartbeat of the subject generate noise and degrade the imaging quality of the image. Therefore, before data analysis, preprocessing is performed first to reduce the influence of irrelevant noise and improve the signal-to-noise ratio; the preprocessing process is implemented with DPARSF software.
In step S1, the process of sampling the preprocessed fMRI image is specifically as follows: assume the sampled time slice length is k; a start frame t0 is selected by calculation, and the sample segment obtained by sampling consists of the k consecutive frames t0, t0+1, ..., t0+k-1. This step is repeated to obtain a plurality of sample segments, which together form the fMRI image samples.
Generally speaking, training a deep learning model from scratch requires a large number of fMRI image samples, and for fMRI images, it is often difficult to obtain a large number of fMRI image samples for model training; therefore, the scheme provides a method for increasing the number of samples by dividing the fMRI image samples into shorter segments, so that the fMRI image sample data is enhanced, the number of training samples is greatly increased, and the training effect of the model is improved.
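The segment-sampling augmentation described above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: the source elides the start-frame formula, so a random start frame is assumed here, and all names are illustrative.

```python
import random

def sample_segments(n_frames, k, n_segments, seed=0):
    """Cut n_segments windows of length k out of a scan with n_frames
    time points; each window starts at a (here: randomly chosen) frame."""
    rng = random.Random(seed)
    segments = []
    for _ in range(n_segments):
        start = rng.randint(0, n_frames - k)            # inclusive start frame t0
        segments.append(list(range(start, start + k)))  # frame indices t0 .. t0+k-1
    return segments

# e.g. a 230-frame scan, 5 augmented segments of 30 frames each
segs = sample_segments(n_frames=230, k=30, n_segments=5)
```

Each scan thus yields several overlapping training samples, which is how the sample count is multiplied without collecting new subjects.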
In step S2, the fMRI image sample is composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the graph network time sequence corresponding to the fMRI image sample specifically includes:
s21: for a time slice, dividing a human brain into a plurality of interested areas according to a brain area division template; taking each interested area as a vertex to obtain a vertex set;
s22: taking the correlation among the vertexes of the vertex set as an edge, and checking the correlation between the vertexes as the strength of the edge based on a k-s verification method to obtain an edge set;
s23: constructing an undirected graph of the time slice according to the vertex set and the edge set;
s24: and reselecting a time slice, repeatedly executing the steps S21-S24 to obtain an undirected graph of each time slice in the fMRI image, and obtaining the graph network time sequence corresponding to the fMRI image sample according to all the undirected graphs.
In step S2, the vertex set is expressed as V = {v_1, v_2, ..., v_N}, where v_i represents the i-th region of interest (ROI) and N is the number of regions of interest. The edge set is expressed by an adjacency matrix A ∈ R^(N×N), where N represents the number of vertices and A_ij is the strength of the edge between vertices v_i and v_j. Specifically, the p-value obtained by applying the k-s test to the BOLD signals of regions of interest v_i and v_j is taken as the strength of the edge between vertices v_i and v_j. The k-s test can be used to check whether the data in two regions of interest obey the same distribution: the smaller the p-value, the smaller the correlation between the two regions of interest. The calculation process of the p-value is specifically as follows:
Let the BOLD signal of region of interest v_i be X = {x_1, x_2, ..., x_n1} and that of region of interest v_j be Y = {y_1, y_2, ..., y_n2}, where n1 and n2 are respectively the numbers of BOLD signal samples of v_i and v_j, and the total number of BOLD signal samples of the two regions of interest is n = n1 + n2. Sorting the BOLD signals of v_i from small to large and renumbering them yields the non-descending sequence x_(1) ≤ x_(2) ≤ ... ≤ x_(n1). The empirical distribution function of v_i is

    F_1(t) = m_1(t) / n1,

where m_1(t) is the number of BOLD signals of v_i that are less than or equal to t. The empirical distribution function of v_j is obtained in the same way:

    F_2(t) = m_2(t) / n2,

where m_2(t) is the number of BOLD signals of v_j that are less than or equal to t. The k-s statistic is

    D = max_t | F_1(t) - F_2(t) |,

where D is the maximum absolute value of the difference between the empirical distribution F_1 of the BOLD signals of v_i and the empirical distribution F_2 of the BOLD signals of v_j. Finally, the p-value of the k-s test of the BOLD signals of v_i and v_j is calculated:

    Z = D · sqrt( n1·n2 / (n1 + n2) ),    p-value ≈ 2·e^(-2Z²),

where Z is the test statistic and e is the natural constant.
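The p-value computation just described can be sketched in pure Python. This is a hedged sketch: the 2·e^(-2Z²) tail formula is the standard one-term asymptotic approximation of the two-sample k-s test, assumed here because the source formula is elided.

```python
import bisect
import math

def ks_pvalue(x, y):
    """Two-sample Kolmogorov-Smirnov test: D is the largest gap between
    the two empirical CDFs; the p-value uses the common one-term
    asymptotic approximation 2*exp(-2*Z^2), clamped to 1."""
    n1, n2 = len(x), len(y)
    xs, ys = sorted(x), sorted(y)
    def ecdf(sorted_v, t):  # fraction of samples <= t
        return bisect.bisect_right(sorted_v, t) / len(sorted_v)
    # the supremum of |F1 - F2| is attained at one of the sample points
    D = max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in xs + ys)
    Z = D * math.sqrt(n1 * n2 / (n1 + n2))
    return D, min(1.0, 2.0 * math.exp(-2.0 * Z * Z))

D_same, p_same = ks_pvalue([1, 2, 3, 4], [1, 2, 3, 4])  # identical samples
D_diff, p_diff = ks_pvalue([1, 2, 3, 4], [5, 6, 7, 8])  # disjoint samples
```

Identical samples give D = 0 (maximal p-value), while disjoint samples give D = 1 and a small p-value, matching the stated interpretation that a smaller p-value means weaker correlation between the two regions.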
Wherein, the step S3 specifically includes the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part as a verification set;
s33: training a graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
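The keep-the-best-validation-parameters rule of S34 can be sketched as a generic training skeleton. The `train_step` and `validate` callables are illustrative stand-ins for the real model routines, which the source does not spell out:

```python
def train_with_validation(epochs, train_step, validate):
    """Skeleton of S33-S34: after each training epoch, evaluate on the
    validation set and keep the parameters with the highest accuracy."""
    best_acc, best_params = -1.0, None
    params = {}
    for epoch in range(epochs):
        params = train_step(params, epoch)
        acc = validate(params)
        if acc > best_acc:                      # checkpoint the best model
            best_acc, best_params = acc, dict(params)
    return best_params, best_acc

# toy run: validation accuracy peaks at epoch 2, then degrades (overfitting)
accs = [0.6, 0.7, 0.9, 0.8]
best, acc = train_with_validation(
    4,
    lambda p, e: {"epoch": e},
    lambda p: accs[p["epoch"]],
)
```

The returned parameters are those of the epoch with the best validation accuracy, not the final epoch, which is the point of the verification step.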
In step S2, the mean value and standard deviation of the BOLD signal of each region of interest are extracted as the features of the corresponding vertex to obtain a vertex attribute matrix. In step S3, the graph convolution neural network comprises a plurality of convolution pooling units, a full connection layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer. The graph network time sequence input to the graph convolution neural network contains the vertex attribute matrix X ∈ R^(N×d) and the adjacency matrix A ∈ R^(N×N), where N is the number of vertices and d is the number of vertex attributes. The operation of the graph convolution layer is specifically:

    H^(l+1) = σ( D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l) ),  with Ã = A + I_N,

where I_N is the N-order identity matrix; D̃ is a diagonal matrix representing the degree of each vertex, D̃_ii = Σ_j Ã_ij, with Ã_ij the element of row i and column j of Ã; H^(l) is the node embedding of the l-th layer, with the layer-0 node features H^(0) = X; and W^(l) is a learnable weight parameter.
The self-attention graph pooling layer first obtains the importance degree of each node, called the self-attention of the node, and then retains the nodes ranked before k by attention score to form the Top-K nodes. The self-attention score Z ∈ R^(N×1) is calculated first, where N is the number of nodes:

    Z = σ( D̃^(-1/2) Ã D̃^(-1/2) H Θ_att ),

where Θ_att ∈ R^(d×1) is a learnable self-attention weight. According to the self-attention scores, the Top-K nodes are selected in a node selection manner, so that a part of the input graph network time sequence is retained, specifically:

    idx = top-rank( Z, ⌈rN⌉ ),

where idx represents the indices of the retained nodes, and top-rank(Z, ⌈rN⌉) selects the ⌈rN⌉ nodes ranked highest by self-attention score; the pooling ratio r represents the percentage of nodes to be retained. After obtaining the indices of the ⌈rN⌉ nodes with the largest self-attention values, a masking operation is performed:

    H' = H_idx,: ,  Z' = Z_idx ,  H_out = H' ⊙ Z' ,  A_out = A_idx,idx ,

where H_idx,: denotes the node embeddings retained by the index mask, Z_idx denotes the attention scores corresponding to the retained nodes, ⊙ denotes element-wise multiplication, A_idx,idx denotes the adjacency matrix of the retained nodes, and H_out and A_out are the node embedding and adjacency matrix output by the self-attention pooling layer.
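The Top-K selection and masking steps can be illustrated with a few lines of pure Python. This is a sketch on plain lists rather than tensors, and all variable names are illustrative:

```python
import math

def sagpool_select(scores, ratio):
    """Top-K node selection of self-attention graph pooling: keep the
    ceil(ratio*N) nodes with the largest attention scores."""
    n = len(scores)
    k = math.ceil(ratio * n)
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]

def mask_graph(X, A, Z, idx):
    """Masking step: keep the selected rows of X, gated element-wise by
    their attention score, and the idx x idx block of the adjacency."""
    X_out = [[Z[i] * v for v in X[i]] for i in idx]
    A_out = [[A[i][j] for j in idx] for i in idx]
    return X_out, A_out

X = [[1.0], [2.0], [3.0], [4.0]]
A = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
Z = [0.9, 0.1, 0.5, 0.7]
idx = sagpool_select(Z, ratio=0.5)     # keep 2 of 4 nodes
X_out, A_out = mask_graph(X, A, Z, idx)
```

With ratio 0.5, the two highest-scoring nodes (indices 0 and 3) survive, their features are scaled by their scores, and the adjacency matrix shrinks to the retained sub-block.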
The readout layer aggregates the node features into a fixed-size representation so as to obtain a high-dimensional representation of the graph. The output of the readout layer is specifically:

    s = ( (1/N) Σ_{i=1..N} h_i^(l) ) || ( max_{i=1..N} h_i^(l) ),

where N represents the number of nodes, h_i^(l) denotes the node embedding of the i-th node at layer l, and || represents the concatenation operation of the features; the readout layer is in effect the concatenation of the outputs of a global average pooling layer and a global max pooling layer.
To realize the reconstructed output of the data, the forward propagation process of the full connection layer is:

    a^(l+1) = σ( W^(l) a^(l) + b^(l) ),

where W^(l) and b^(l) are respectively the learnable weight matrix and the learnable bias of the l-th full connection layer, and d_l and d_(l+1) respectively represent the numbers of neurons of the l-th and (l+1)-th full connection layers, so that W^(l) ∈ R^(d_(l+1)×d_l). The final classification result is then obtained through the softmax classifier:

    ŷ_j = e^(z_j) / Σ_{c=1..C} e^(z_c),  j = 1, ..., C,

where z is the output of the last full connection layer, whose number of neurons equals the number of categories C. The graph convolution neural network obtains a plurality of graphs from the self-attention graph pooling layers, obtains the high-dimensional feature representations of the graphs at different levels through the readout layers, adds these high-dimensional features to obtain the final high-dimensional feature representation, reconstructs the high-dimensional features through the full connection layer, uses the reconstructed features as the input of the time domain convolution neural network, and finally obtains the classification result of the input graph through the softmax classifier.
In step S3, the input layer of the time domain convolution neural network is connected to the full connection layer of the graph convolution neural network; the input is processed by a plurality of TCN layers, passed through a flattening layer, and then output by the output layer to the softmax classifier. Each TCN layer converts the input dimension into a dimension consistent with the output dimension through a one-dimensional fully-convolutional structure. The forward propagation process is as follows: the output vectors of the full connection layer are spliced to form the sequence data S ∈ R^(k×d), where k is the length of the time slice and d is the number of neurons of the full connection layer. S is input into the TCN layers; after passing through a plurality of TCN layers, the output is expanded into a one-dimensional vector by the flattening layer and finally classified by the softmax classifier to obtain the classification result of the time slice.
In step S3, each TCN layer of the time domain convolution neural network is composed of causal convolution and dilated convolution, wherein:

In causal convolution, an element of the output sequence depends only on the elements that precede it in the input sequence; for time series data, the value of a layer at time T depends only on the values of the previous layer at time T and before, namely:

    y_T^(l+1) = f( x_1^(l), x_2^(l), ..., x_T^(l) ),

where y_T^(l+1) represents the output of the causal convolution at time T, and x_1^(l), ..., x_T^(l) represent the feature vectors of layer l from time 1 to time T.

Dilated convolution refers to performing the convolution operation on non-contiguous inputs spaced at a fixed interval, with as many taps as the size of the convolution kernel; the dilation coefficient e is used to control the degree of discontinuity of the neurons participating in the convolution operation. The calculation formula of dilated convolution is:

    F(t) = Σ_{i=0..K-1} w_i · x_(t-e·i),

where e represents the dilation coefficient, K represents the size of the convolution kernel, and w_i is the weight of the i-th term of the convolution kernel. When e is 1, the dilated convolution degenerates into ordinary convolution; by controlling e, the receptive field can be enlarged while the amount of computation remains unchanged.
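The dilated causal convolution formula above can be checked with a small pure-Python sketch (zero padding for look-backs before the start of the sequence is an implementation assumption):

```python
def dilated_causal_conv(x, w, e):
    """Dilated causal 1-D convolution: F(t) = sum_i w[i] * x[t - e*i],
    with zeros assumed before the start of the sequence, so each output
    depends only on current and past inputs (causality)."""
    out = []
    for t in range(len(x)):
        s = 0.0
        for i, wi in enumerate(w):
            j = t - e * i              # look back e*i steps
            if j >= 0:
                s += wi * x[j]
        out.append(s)
    return out

y1 = dilated_causal_conv([1, 2, 3, 4], [1, 1], e=1)  # e=1: ordinary causal conv
y2 = dilated_causal_conv([1, 2, 3, 4], [1, 1], e=2)  # e=2: wider look-back, same cost
```

With the same two-tap kernel, e = 2 reaches two steps further into the past than e = 1 at no extra multiplication cost, which is exactly the receptive-field argument in the text.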
In the graph convolution neural network-time domain convolution neural network model constructed in step S3, the loss function consists of three parts: the node classification loss, the time segment classification loss and the final classification loss. The loss function is specifically expressed as:

    L = α Σ_i Σ_j L_node^(i,j) + β Σ_i L_seg^(i) + γ L_final,

where L_node^(i,j) is the node classification loss of the j-th node at the i-th time point; since the self-attention pooling layer is applied in the graph convolution neural network, only the final Top-K nodes of each graph are retained, so the loss function likewise only calculates the classification loss of the Top-K nodes. L_seg^(i) is the time segment classification loss at the i-th time point, with i = 1, ..., k, where k is the number of time points over which the classification loss of the graph convolution neural network is calculated. L_final is the classification loss of the final time domain convolution neural network. The hyper-parameters α, β and γ control the node classification, time segment classification and final classification losses respectively, with α + β + γ = 1 and α, β, γ ≥ 0. All classification losses use the cross-entropy loss function, which is specifically expressed as:

    L_CE = - Σ_{j=1..C} p_j log q_j,

where p_j represents the true probability value of the j-th class of the sample and q_j represents the predicted probability value of the j-th class obtained from the model.
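The three-part weighted loss can be sketched numerically. This is a hedged illustration: the specific weights and the α + β + γ = 1 constraint are assumptions about the elided formula, and the toy loss values are made up.

```python
import math

def cross_entropy(p_true, q_pred, eps=1e-12):
    """Cross-entropy between the true distribution p and the model's
    predicted probabilities q (eps guards against log(0))."""
    return -sum(p * math.log(max(q, eps)) for p, q in zip(p_true, q_pred))

def total_loss(node_losses, segment_losses, final_loss, alpha, beta, gamma):
    """Weighted three-part loss: node + time-segment + final classification."""
    return (alpha * sum(node_losses)
            + beta * sum(segment_losses)
            + gamma * final_loss)

ce = cross_entropy([1.0, 0.0], [0.5, 0.5])   # = -log(0.5)
loss = total_loss([0.2, 0.3], [0.4], 0.6, alpha=0.3, beta=0.3, gamma=0.4)
```

Each of the three terms would itself be a cross-entropy in the model; the weights let the auxiliary node and segment losses regularize training without dominating the final classification objective.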
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a medical image classification method based on a graph network time sequence, which constructs the graph network time sequence capable of showing the dynamic change of functional connection between brain partitions through fMRI images, and realizes the showing of the dynamic change rule of the brain functional network connection; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, so that the extraction of graph characteristics and the learning of change rules in a graph network time sequence are facilitated, and the classification capability of the model is effectively improved.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a detailed schematic diagram of the graph convolutional neural network-time domain convolutional neural network model according to the present invention;
FIG. 3 is a detailed schematic diagram of the graph convolution neural network of the present invention;
fig. 4 is a specific schematic diagram of the time domain convolutional neural network according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described with reference to the drawings and the embodiments.
Example 1
As shown in fig. 1, a medical image classification method based on graph network time series includes the following steps:
s1: collecting an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
s4: and inputting the fMRI image to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize classification of the medical image.
In the specific implementation process, a graph network time sequence capable of showing the dynamic change of the functional connection between the brain partitions is constructed through the fMRI images, so that the showing of the dynamic change rule of the brain functional network connection is realized; meanwhile, a graph convolution neural network-time domain convolution neural network model is provided, so that the extraction of graph characteristics and the learning of change rules in a graph network time sequence are facilitated, and the classification capability of the model is effectively improved.
More specifically, in step S1, the raw fMRI image is preprocessed by DPARSF software.
During image acquisition, factors such as the head movement, respiration and heartbeat of the subject generate noise and degrade the imaging quality of the image. Therefore, before data analysis, preprocessing is performed first to reduce the influence of irrelevant noise and improve the signal-to-noise ratio. In this scheme, the preprocessing process is implemented with DPARSF software and specifically comprises:
First, the first 10 frames of data of each fMRI image sample are removed to obtain a stable signal. Second, slice-timing correction is performed to ensure that the data on every slice corresponds to the same time point. After the temporal correction, spatial correction is performed: each frame image of each subject is realigned to its mean image and spatially normalized to MNI (Montreal Neurological Institute) space, thereby eliminating inter-individual differences. All images are then spatially smoothed with a 4 × 4 × 4 mm³ full-width-at-half-maximum Gaussian kernel; linear detrending and low-frequency filtering (0.01 Hz-0.08 Hz) are applied; and covariate regression analysis is performed, with the eliminated interference factors including cerebrospinal fluid, white-matter signals and head movement.
More specifically, in step S1, the process of sampling the preprocessed fMRI image is specifically as follows: assume the time slice length of the sample is k; a start frame t0 is selected by calculation, and the sample segment obtained by sampling consists of the k consecutive frames t0, t0+1, ..., t0+k-1. This step is repeated to obtain a plurality of sample segments, which together form the fMRI image samples.
Generally speaking, training a deep learning model from scratch requires a large number of fMRI image samples, and for fMRI images, it is often difficult to obtain a large number of fMRI image samples for model training; therefore, the scheme provides a method for increasing the number of samples by segmenting the fMRI image samples into shorter segments, so that the fMRI image sample data is enhanced, the number of training samples is greatly increased, and the training effect of the model is improved.
More specifically, in step S2, the fMRI image sample is composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the graph network time series corresponding to the fMRI image sample specifically includes:
s21: for a time slice, dividing a human brain into a plurality of interested areas according to a brain area division template; taking each interested region as a vertex to obtain a vertex set;
S22: the correlation among all vertexes of the vertex set is used as an edge, the correlation size between the vertexes is checked based on a k-s verification method to be used as the strength of the edge, and the edge set is obtained;
S23: constructing the undirected graph of the time slice according to the vertex set and the edge set;
S24: reselecting a time slice, repeatedly executing the steps S21-S24 to obtain an undirected graph of each time slice in the fMRI image, and obtaining the graph network time sequence corresponding to the fMRI image sample according to all the undirected graphs, expressed as G = {G_1, G_2, ..., G_k}, where k is the number of fMRI time points and G_t represents the graph constructed from the t-th time slice.
In a specific implementation, for each time slice in each fMRI image sample, the human brain is divided into N regions of interest according to a brain-region division template such as the AAL template or the Brainnetome template. This scheme adopts the AAL template, which divides the human brain into 116 regions of interest, 90 of which are cerebral regions; only these 90 cerebral regions are selected, and each region of interest is taken as a vertex to obtain the vertex set.
More specifically, in step S2, the vertex set is represented as V = {v_1, ..., v_N}, where v_i denotes the i-th region of interest (ROI) and N is the number of regions of interest; the edge set is represented by an adjacency matrix A, where N is the number of vertices and A_ij is the strength of the edge between vertices v_i and v_j. Specifically, the p-value obtained by applying the k-s test to the BOLD signals of region of interest i and region of interest j is used as the strength of the edge between vertices v_i and v_j. The k-s test can be used to verify whether the data in the two regions of interest obey the same distribution; the smaller the p-value, the smaller the correlation between the two regions of interest. The p-value is calculated as follows:
Let the BOLD signal of region of interest i be X = {x_1, x_2, ..., x_n} and the BOLD signal of region of interest j be Y = {y_1, y_2, ..., y_m}, where n and m are respectively the numbers of BOLD time points of the two regions of interest. Sorting the BOLD signals of region i from small to large and renumbering them gives the sorted signals x_(1) <= x_(2) <= ... <= x_(n), i.e. the BOLD signal of region i in non-descending order. Its empirical distribution function is

F_n(x) = (1/n) * #{ k : x_k <= x },

where #{k : x_k <= x} is the number of BOLD signals of region i that are less than or equal to x; the empirical distribution function F_m(y) of region j is obtained in the same way, with #{k : y_k <= y} the number of BOLD signals of region j that are less than or equal to y. The k-s statistic is

D = max_x | F_n(x) - F_m(x) |,

i.e. the maximum of the absolute value of the difference between the empirical distribution F_n of the BOLD signal of region i and the empirical distribution F_m of the BOLD signal of region j. Finally, the p-value of the k-s test of the BOLD signals of region i and region j is computed as

p-value = 2 * SUM_{k=1..inf} (-1)^(k-1) * e^(-2 k^2 Z^2), with Z = D * sqrt(nm / (n + m)),

where Z is the test statistic and e is the natural constant.
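The k-s-based edge construction of step S2 can be sketched with `scipy.stats.ks_2samp`, which computes the two-sample Kolmogorov-Smirnov p-value directly. The function names and the choice of 1.0 for self-edges are assumptions of this sketch, not the patent's specification.

```python
import numpy as np
from scipy.stats import ks_2samp

def vertex_features(bold):
    """Vertex attribute matrix: mean and std of each ROI's BOLD signal."""
    return np.stack([bold.mean(axis=0), bold.std(axis=0)], axis=1)

def ks_adjacency(bold):
    """A[i, j] = two-sample KS p-value between the BOLD signals of ROI i
    and ROI j within one time slice (bold has shape (k, N))."""
    N = bold.shape[1]
    A = np.eye(N)  # self-edges set to 1 (identical distributions)
    for i in range(N):
        for j in range(i + 1, N):
            A[i, j] = A[j, i] = ks_2samp(bold[:, i], bold[:, j]).pvalue
    return A

slice_bold = np.random.randn(64, 5)      # one 64-frame slice, 5 ROIs
X = vertex_features(slice_bold)          # (5, 2) vertex attributes
A = ks_adjacency(slice_bold)             # (5, 5) symmetric edge strengths
```

Repeating this for every time slice yields the graph network time sequence.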
Example 2
More specifically, on the basis of embodiment 1, a graph convolution neural network-time domain convolution neural network model is constructed in step S3; during model building, the design mainly focuses on how to fuse spatial-dimension information with time-dimension information. In this embodiment, graph features are first extracted with a graph convolution network and then input into a time-domain convolution neural network to obtain the final classification result. Step S3 specifically comprises the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part as a verification set;
s33: training a graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
More specifically, in step S2, the mean and standard deviation of the BOLD signal of each region of interest are extracted as the features of its vertex, yielding a vertex attribute matrix. In step S3, the graph convolution neural network designed in this embodiment, as shown in fig. 3, comprises a plurality of convolution pooling units, a fully connected layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer. The graph network time sequence input to the graph convolution neural network contains a vertex attribute matrix X and an adjacency matrix A, where N is the number of vertices and F is the number of vertex attributes. The operation of the graph convolution layer is specifically:
H^(l+1) = sigma( D~^(-1/2) A~ D~^(-1/2) H^(l) W^(l) ), with A~ = A + I_N,

where I_N is the N-order identity matrix; D~ is a diagonal matrix representing the degree of each vertex, with D~_ii = SUM_j A~_ij; A~_ij is the element in row i and column j of A~, and D~_ii is the element in row i and column i of D~; H^(l) is the node embedding of the l-th layer, with the layer-0 node features H^(0) = X; W^(l) is a learnable weight parameter.
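The graph convolution layer described above can be sketched in a few lines of numpy. This is a minimal illustration with assumed names and a ReLU nonlinearity standing in for the unspecified activation; the shapes follow the scheme's 90-ROI graphs with 2 vertex attributes (mean, std).

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # add self-loops
    d = A_tilde.sum(axis=1)                    # degree of each vertex
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalisation
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU activation

rng = np.random.default_rng(0)
H1 = gcn_layer(rng.random((90, 90)),            # adjacency (p-values)
               rng.standard_normal((90, 2)),    # vertex attributes
               rng.standard_normal((2, 16)))    # learnable weights
print(H1.shape)  # (90, 16)
```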
The self-attention graph pooling layer needs to obtain the importance of each node in the layer, called the node's self-attention, and then retains the nodes ranked before k by attention score, forming the Top-K nodes. The self-attention score Z, where N is the number of nodes, is first computed:

Z = sigma( D~^(-1/2) A~ D~^(-1/2) X Theta_att ),

where Theta_att is a learnable self-attention weight. This formula is very similar to the operation of the graph convolution layer: the graph convolution layer obtains the node embedding of the next layer, while this formula obtains the self-attention scores of the nodes in the current layer. According to the self-attention scores, the Top-K nodes are selected by a node selection method, retaining part of the input graph, specifically:
idx = top-rank(Z, ceil(kN)), Z_mask = Z_idx,

where idx denotes the indices of the retained nodes; top-rank(Z, ceil(kN)) selects the ceil(kN) top-ranked nodes; the pooling ratio k in (0, 1] represents the proportion of nodes to be retained. After obtaining the indices of the ceil(kN) nodes with the largest self-attention values, a masking operation is performed:
X' = X_idx,: (*) Z_mask, A' = A_idx,idx,

where X_idx,: denotes the node embeddings with the retained indices masked, Z_mask denotes the attention scores corresponding to the retained nodes, (*) denotes element-wise multiplication, and A_idx,idx denotes the adjacency matrix of the retained nodes; X' and A' are the node embedding and adjacency matrix output by the self-attention pooling layer;
The readout layer aggregates the node features into a fixed-size representation, yielding a high-dimensional representation of the graph. The output of the readout layer is

s^(l) = (1/N) SUM_{i=1..N} x_i^(l) || max_{i=1..N} x_i^(l),

where N is the number of nodes, x_i^(l) denotes the embedding of the i-th node in layer l, and || denotes the concatenation operation; the readout layer is in fact the concatenation of a global average pooling layer and a global max pooling layer over the features.
In order to realize the reconstruction output of the data, the forward propagation of the fully connected layer is

a^(l+1) = sigma( W^(l) a^(l) + b^(l) ),

where W^(l) and b^(l) are respectively the learnable weight matrix and learnable bias of the l-th fully connected layer, and the dimensions of a^(l) and a^(l+1) are respectively the numbers of neurons of the l-th and (l+1)-th fully connected layers; finally the classification result is obtained through a softmax classifier:

y_j = softmax(a_j) = e^(a_j) / SUM_{c=1..C} e^(a_c), j = 1, ..., C,

where C is the number of categories. The graph convolution neural network obtains several graphs through the self-attention graph pooling layers, obtains high-dimensional feature representations of the graphs at different levels through the readout layers, adds these high-dimensional features to obtain the final high-dimensional feature representation, reconstructs it through the fully connected layer, uses the reconstructed features as the input of the time-domain convolution neural network, and finally obtains the classification result of the input graph through the softmax classifier.
More specifically, in step S3, the structure of the time-domain convolutional neural network in this embodiment is shown in fig. 4: its input layer is connected to the fully connected layer of the graph convolutional neural network, the data are processed by a plurality of temporal convolutional network (TCN) layers, flattened by a flatten layer, and output by the output layer to a softmax classifier. Each TCN layer transforms the dimension of its input to be consistent with the dimension of its output through a one-dimensional fully convolutional structure. Its forward propagation process is as follows:

the output vectors of the fully connected layer are concatenated to form sequence data S, where the sequence length is the time-slice length and the feature dimension is the number of neurons in the fully connected layer. S is input into the TCN layers; after passing through several TCN layers, the output is flattened into a one-dimensional vector by the flatten layer and finally classified by the softmax classifier to obtain the classification result of the time slice. In order to reduce the number of model parameters, the graph convolution neural network in this embodiment adopts a shared-weight design across time points.
More specifically, in step S3, the TCN layer of the time-domain convolutional neural network is composed of a causal convolution and a dilated convolution, wherein:
In causal convolution, an element of the output sequence depends only on the elements at or before it in the input sequence and cannot see future data; it is a strict sequential-constraint model. For time-series data, the value of the next layer at a certain time t depends only on the values of the previous layer at time t and before, namely:

y_t^(l+1) = F( x_1^(l), ..., x_t^(l) ),

where y_t^(l+1) represents the output of the causal convolution at time t, and x_1^(l), ..., x_t^(l) represent the feature vectors of layer l from time 1 to time t. Dilated convolution refers to performing the convolution operation on non-adjacent neurons, with as many taps as the convolution kernel; it has a dilation coefficient d used to control the degree of discontinuity of the neurons participating in the convolution operation. The calculation formula of the dilated convolution is:

F(t) = SUM_{i=0..K-1} f(i) * x_(t - d*i),

where d is the dilation coefficient, K is the size of the convolution kernel, and f(i) is the weight of the i-th tap of the kernel. When d = 1, the dilated convolution degenerates into an ordinary convolution; by controlling d, the receptive field can be enlarged without increasing the amount of computation.
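The causal dilated convolution can be made concrete with a direct (unoptimised) sketch: each output y[t] sums kernel taps applied at times t, t-d, t-2d, ..., and inputs before time 0 are treated as zero padding. The function name and zero-padding convention are assumptions of this sketch.

```python
import numpy as np

def causal_dilated_conv(x, f, d):
    """y[t] = sum_i f[i] * x[t - d*i]; out-of-range inputs count as zero,
    so no output ever depends on future elements of x."""
    K = len(f)
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(K):
            if t - d * i >= 0:
                y[t] += f[i] * x[t - d * i]
    return y

# kernel [1, 1] with dilation 2 computes y[t] = x[t] + x[t-2]
y = causal_dilated_conv(np.arange(8, dtype=float), f=[1.0, 1.0], d=2)
print(y)  # [ 0.  1.  2.  4.  6.  8. 10. 12.]
```

Stacking such layers with growing d (1, 2, 4, ...) enlarges the receptive field exponentially at constant cost per layer.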
More specifically, in the graph convolution neural network-time domain convolution neural network model constructed in step S3, the loss function consists of three parts: the node classification loss, the time-slice classification loss, and the final classification loss. The loss function is expressed as

L = alpha * SUM_i SUM_j L_node(i, j) + beta * SUM_i L_slice(i) + gamma * L_final,

where L_node(i, j) is the node classification loss of the j-th node at the i-th time point; because a self-attention pooling layer is applied in the graph convolution neural network, only the final Top-K nodes of each graph are retained, and the loss function likewise only computes the classification loss of these Top-K nodes. L_slice(i) is the time-slice classification loss at the i-th time point, i.e. the classification loss of the graph convolution neural network over the time points, and L_final is the final classification loss of the time-domain convolution neural network. The hyperparameters alpha, beta and gamma respectively control the influence of the node classification loss, the time-slice classification loss and the final classification loss, with alpha, beta, gamma in [0, 1] and alpha + beta + gamma = 1. All classification losses use the cross-entropy loss function, which is specifically expressed as

L_CE = - SUM_j p_j log q_j,

where p_j represents the true probability of the j-th class of the sample and q_j represents the predicted probability of the j-th class obtained from the model.
In the specific implementation process, a loss function consisting of the node classification loss, the time-slice classification loss and the final classification loss is provided, which improves the classification capability of each module of the model as well as that of the final model.
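The three-part loss can be sketched as a weighted combination of cross-entropy terms. This is a toy illustration: the aggregation of per-node and per-slice terms by averaging, and the constraint that the weights sum to 1, are assumptions of this sketch.

```python
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """Cross-entropy between a true distribution and predicted probabilities."""
    return -float(np.sum(np.asarray(p_true) * np.log(np.asarray(q_pred) + eps)))

def total_loss(node_losses, slice_losses, final_loss, alpha, beta, gamma):
    """Weighted combination of node, time-slice, and final classification
    losses (aggregation by mean is an assumption of this sketch)."""
    return (alpha * np.mean(node_losses)
            + beta * np.mean(slice_losses)
            + gamma * final_loss)

# toy numbers: two node losses, one slice loss, one final loss
L = total_loss([1.0, 2.0], [3.0], 4.0, alpha=0.2, beta=0.3, gamma=0.5)
print(L)  # 3.2
```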
Example 3
More specifically, in step S3, the trained graph convolution neural network-time domain convolution neural network model can be tested. In the testing stage, the fMRI image is sampled with a sliding window, a graph network time sequence is constructed from all the sampled segments and input into the model proposed in this scheme, and the final classification result is obtained from the classification results of all the sampled segments by simple voting. Specifically, assume a test fMRI image sample of T frames, a sampling segment length of k frames, and a sliding step of m; a = floor((T - k)/m) + 1 sampled segments are finally obtained. Inputting them into the model yields the corresponding predicted classification results, and the final classification is obtained by simple voting.
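The sliding-window-plus-voting test procedure can be sketched as follows; the classifier is abstracted as a callable, and the function name and toy classifier are assumptions of this sketch.

```python
import numpy as np
from collections import Counter

def sliding_window_predict(bold, k, m, predict):
    """Sample length-k windows with stride m from a (T, N) series, classify
    each window with `predict`, and return the majority-vote label."""
    T = bold.shape[0]
    votes = [predict(bold[s:s + k]) for s in range(0, T - k + 1, m)]
    return Counter(votes).most_common(1)[0][0]  # simple voting

# toy stand-in classifier: label 1 iff the window mean is positive
label = sliding_window_predict(np.random.randn(140, 90), k=64, m=10,
                               predict=lambda w: int(w.mean() > 0))
print(label in (0, 1))  # True
```

With T = 140, k = 64 and m = 10 this yields a = floor((140-64)/10)+1 = 8 windows, each contributing one vote.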
Hereinafter, taking Alzheimer's Disease (AD) as an example, fMRI image data from the large American public Alzheimer's disease database ADNI (Alzheimer's Disease Neuroimaging Initiative) are used: 250 fMRI images (121 AD, 129 control) from 60 subjects (25 AD, 35 control) are collected in total, i.e. one subject may have several fMRI images. These data are input into the model of the present application as the experimental data of the present invention, to evaluate the effect of the model and compare performance differences. As described above, the training data are input into the model, and model performance is then tested with the test data set. To reduce the impact of data-set partitioning on the experimental results, this example adopts five-fold cross-validation to evaluate the performance of the model. To avoid data leakage, the data set is partitioned by subject, i.e. the multiple fMRI images of one subject appear only in the training set or only in the test set.
1. Parameter setting
During training, the batch size is 32 and the number of epochs is 200; parameters are updated with the Adam gradient-descent method, the learning rate is 0.001 and decays exponentially over time. Part of the data in the training set is split off as a validation set, and during training the parameters with the highest accuracy on the validation set are taken as the final parameters of the model. The sliding step is m = 10 at test time.
2. Results of the experiment
Table 1 shows the effect of different sampling lengths on the model results; the best generalization performance is obtained with a sampling frame length of 64.
TABLE 1
Sampling frame length | Accuracy | Standard deviation
16 | 0.68 | 0.08
32 | 0.62 | 0.16
48 | 0.69 | 0.07
64 | 0.72 | 0.10
Table 2 shows the influence of different values of the loss-function hyperparameters on the model results at a sampling length of 64:
TABLE 2
As can be seen from Table 2, the loss function designed by the present invention is effective: compared with using only the final loss, or only the time-slice loss together with the final loss, the model trained with the proposed loss function improves classification performance to a certain extent. Finally, in order to verify the effectiveness of the proposed method of constructing functional connections between regions of interest based on the k-s test, the traditional Pearson-correlation-based construction of functional connections is used as a comparison experiment, i.e. for each time point, the connection strength between region of interest i and region of interest j is:

r_ij = SUM_t (x_t^i - xbar^i)(x_t^j - xbar^j) / sqrt( SUM_t (x_t^i - xbar^i)^2 * SUM_t (x_t^j - xbar^j)^2 ),

where x_t^i and x_t^j represent the BOLD signals of regions of interest i and j at the t-th time point, and xbar^i and xbar^j represent the mean values of the BOLD signals of regions i and j. In the graph network time sequence constructed this way, the edge weights of every graph are the same; the five-fold cross-validation of this baseline yields an average accuracy of 60% with a standard deviation of 5%. Compared with it, the method of the present invention improves accuracy by about 12 percentage points and obtains a better effect, which shows that the proposed method of constructing the graph network time sequence is effective. This can be interpreted as follows: the k-s-test-based construction of the graph network time sequence effectively reflects the dynamic change over time of the functional correlations between brain regions in the neurophysiological process, whereas the traditional Pearson-correlation method builds the brain functional connections from all time points together and cannot express this dynamic pattern.
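The Pearson baseline used in the comparison can be sketched in one call; computed over the full scan, it produces a single static adjacency shared by every graph in the sequence, in contrast to the per-slice k-s graphs. The function name is an assumption of this sketch.

```python
import numpy as np

def pearson_adjacency(bold):
    """Static baseline: Pearson correlation between all ROI pairs over the
    whole (T, N) scan, so every graph in the sequence shares these weights."""
    return np.corrcoef(bold.T)

A = pearson_adjacency(np.random.randn(140, 5))
print(A.shape)  # (5, 5)
```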
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (10)
1. A medical image classification method based on a graph network time sequence is characterized by comprising the following steps:
s1: acquiring an original fMRI image, preprocessing and sampling to obtain an fMRI image sample;
s2: constructing a graph network time sequence capable of showing dynamic changes of functional connection among brain partitions based on a k-s verification method, and processing the fMRI image samples to obtain a graph network time sequence corresponding to each fMRI image sample;
s3: constructing a graph convolution neural network-time domain convolution neural network model, and training and verifying the graph convolution neural network-time domain convolution neural network model by utilizing a graph network time sequence;
s4: and inputting the fMRI image to be classified into the graph convolution neural network-time domain convolution neural network model which completes training and verification, so as to realize classification of the medical image.
2. The method for classifying medical images based on graph network time series according to claim 1, wherein in step S1, raw fMRI images are preprocessed by DPARSF software.
3. The method for classifying medical images based on graph network time series according to claim 1, wherein in the step S1, the process of sampling the preprocessed fMRI image specifically comprises: assuming that a sampled time slice has length k, a start frame is selected by calculation, and a sample segment of k consecutive frames beginning at that start frame is obtained by sampling; this step is repeated to obtain a plurality of sample segments, which form the fMRI image samples.
4. The method according to claim 1, wherein in step S2, the fMRI image samples are composed of a plurality of time slices of each fMRI image, and for each time slice in one fMRI image, the process of obtaining the atlas network time series corresponding to the fMRI image sample specifically comprises:
s21: for a time slice, dividing a human brain into a plurality of interested areas according to a brain area division template; taking each interested area as a vertex to obtain a vertex set;
s22: taking the correlation among the vertexes of the vertex set as an edge, and checking the correlation between the vertexes as the strength of the edge based on a k-s verification method to obtain an edge set;
s23: constructing an undirected graph of the time slice according to the vertex set and the edge set;
s24: and reselecting a time slice, repeating steps S21-S23 to obtain an undirected graph of each time slice in the fMRI image, and obtaining the graph network time sequence corresponding to the fMRI image sample from all the undirected graphs.
5. The method according to claim 4, wherein in step S2, the vertex set is represented as V = {v_1, ..., v_N}, where v_i represents the i-th region of interest and N is the number of regions of interest; the edge set is represented by an adjacency matrix A, where N represents the number of vertices and A_ij is the strength of the edge between vertices v_i and v_j; specifically, the p-value obtained by the k-s test of the BOLD signals of region of interest i and region of interest j is used as the strength of the edge between vertices v_i and v_j; the k-s test can be used to verify whether the data in the two regions of interest obey the same distribution, and the smaller the p-value, the smaller the correlation between the two regions of interest; the p-value is calculated as follows:
let the BOLD signal of region of interest i be X = {x_1, x_2, ..., x_n} and the BOLD signal of region of interest j be Y = {y_1, y_2, ..., y_m}, where n and m are respectively the numbers of BOLD time points of the two regions of interest; sorting the BOLD signals of region i from small to large and renumbering them gives the sorted signals x_(1) <= x_(2) <= ... <= x_(n), i.e. the BOLD signal of region i in non-descending order; its empirical distribution function is

F_n(x) = (1/n) * #{ k : x_k <= x },

where #{k : x_k <= x} is the number of BOLD signals of region i that are less than or equal to x; the empirical distribution function F_m(y) of region j is obtained in the same way, with #{k : y_k <= y} the number of BOLD signals of region j that are less than or equal to y; the k-s statistic is

D = max_x | F_n(x) - F_m(x) |,

i.e. the maximum of the absolute value of the difference between the empirical distribution F_n of the BOLD signal of region i and the empirical distribution F_m of the BOLD signal of region j; finally, the p-value of the k-s test of the BOLD signals of region i and region j is computed as

p-value = 2 * SUM_{k=1..inf} (-1)^(k-1) * e^(-2 k^2 Z^2), with Z = D * sqrt(nm / (n + m)),

where Z is the test statistic and e is the natural constant.
6. The method according to claim 4, wherein the step S3 specifically comprises the following steps:
s31: respectively constructing a graph convolution neural network and a time domain convolution neural network, and forming the graph convolution neural network and the time domain convolution neural network into a graph convolution neural network-time domain convolution neural network model;
s32: taking one part of the graph network time sequence as a training set, and taking the rest part of the graph network time sequence as a verification set;
s33: training the graph convolution neural network-time domain convolution neural network model by using a training set;
s34: in the training process, the graph convolution neural network-time domain convolution neural network model is verified through a verification set, and the parameters with the highest accuracy in the verification set are used as the parameters of the graph convolution neural network-time domain convolution neural network model to complete the training of the graph convolution neural network-time domain convolution neural network model;
in the training process, the graph characteristics of the graph network time sequence are extracted by the constructed graph convolution neural network, and the graph characteristics are input into the time domain convolution neural network to obtain a classification result.
7. The method according to claim 6, wherein in step S2, the mean and standard deviation of the BOLD signal of each region of interest are extracted as the features of its vertex to obtain a vertex attribute matrix; in step S3, the graph convolution neural network comprises a plurality of convolution pooling units, a fully connected layer and a softmax classifier; each convolution pooling unit comprises a graph convolution layer, a self-attention graph pooling layer and a readout layer; the graph network time sequence input to the graph convolution neural network contains a vertex attribute matrix X and an adjacency matrix A, where N is the number of vertices and F is the number of vertex attributes; the operation of the graph convolution layer is specifically:

H^(l+1) = sigma( D~^(-1/2) A~ D~^(-1/2) H^(l) W^(l) ), with A~ = A + I_N,

where I_N is the N-order identity matrix; D~ is a diagonal matrix representing the degree of each vertex, with D~_ii = SUM_j A~_ij; A~_ij is the element in row i and column j of A~, and D~_ii is the element in row i and column i of D~; H^(l) is the node embedding of the l-th layer, with the layer-0 node features H^(0) = X; W^(l) is a learnable weight parameter;
the self-attention graph pooling layer needs to obtain the importance of each node in the layer, called the node's self-attention, and then retains the nodes ranked before k by attention score, forming the Top-K nodes; the self-attention score Z, where N is the number of nodes, is first computed:

Z = sigma( D~^(-1/2) A~ D~^(-1/2) X Theta_att ),

where Theta_att is a learnable self-attention weight; according to the self-attention scores, the Top-K nodes are selected by a node selection method, retaining part of the input graph, specifically:

idx = top-rank(Z, ceil(kN)), Z_mask = Z_idx,

where idx represents the indices of the retained nodes; top-rank(Z, ceil(kN)) selects the ceil(kN) top-ranked nodes; the pooling ratio k in (0, 1] represents the proportion of nodes to be retained; after obtaining the indices of the ceil(kN) nodes with the largest self-attention values, a masking operation is performed:

X' = X_idx,: (*) Z_mask, A' = A_idx,idx,

where X_idx,: denotes the node embeddings with the retained indices masked, Z_mask denotes the attention scores corresponding to the retained nodes, (*) denotes element-wise multiplication, A_idx,idx denotes the adjacency matrix of the retained nodes, and X' and A' are the node embedding and adjacency matrix output by the self-attention pooling layer;
the readout layer aggregates the node features into a fixed-size representation to obtain a high-dimensional representation of the graph, and the output of the readout layer is

s^(l) = (1/N) SUM_{i=1..N} x_i^(l) || max_{i=1..N} x_i^(l),

where N is the number of nodes, x_i^(l) denotes the embedding of the i-th node in layer l, and || denotes the concatenation operation; the readout layer is in fact the concatenation of a global average pooling layer and a global max pooling layer over the features;
in order to realize the reconstruction output of the data, the forward propagation of the fully connected layer is

a^(l+1) = sigma( W^(l) a^(l) + b^(l) ),

where W^(l) and b^(l) are respectively the learnable weight matrix and learnable bias of the l-th fully connected layer, and the dimensions of a^(l) and a^(l+1) are respectively the numbers of neurons of the l-th and (l+1)-th fully connected layers; finally the classification result is obtained through a softmax classifier:

y_j = softmax(a_j) = e^(a_j) / SUM_{c=1..C} e^(a_c), j = 1, ..., C,

where C is the number of categories; the graph convolution neural network obtains several graphs through the self-attention graph pooling layers, obtains high-dimensional feature representations of the graphs at different levels through the readout layers, adds these high-dimensional features to obtain the final high-dimensional feature representation, reconstructs it through the fully connected layer, uses the reconstructed features as the input of the time-domain convolution neural network, and finally obtains the classification result of the input graph through the softmax classifier.
8. The method for classifying medical images based on graph network time series according to claim 7, wherein in step S3, the input layer of the time-domain convolutional neural network is connected to the fully connected layer of the graph convolutional neural network; the data are processed by a plurality of TCN layers, flattened by a flatten layer, and output by the output layer to a softmax classifier, wherein each TCN layer transforms the dimension of its input to be consistent with the dimension of its output through a one-dimensional fully convolutional structure; its forward propagation process is as follows:

the output vectors of the fully connected layer are concatenated to form sequence data S, where the sequence length is the time-slice length and the feature dimension is the number of neurons in the fully connected layer; S is input into the TCN layers; after passing through several TCN layers, the output is flattened into a one-dimensional vector by the flatten layer and finally classified by the softmax classifier to obtain the classification result of the time slice.
9. The method for classifying medical images based on graph network time series according to claim 8, wherein in step S3, the TCN layer of the time-domain convolution neural network is composed of a causal convolution and a dilated convolution, wherein:

in causal convolution, an element of the output sequence depends only on the elements at or before it in the input sequence; for time-series data, the value of the next layer at a certain time t depends only on the values of the previous layer at time t and before, namely:

y_t^(l+1) = F( x_1^(l), ..., x_t^(l) ),

where y_t^(l+1) represents the output of the causal convolution at time t, and x_1^(l), ..., x_t^(l) represent the feature vectors of layer l from time 1 to time t; dilated convolution refers to performing the convolution operation on non-adjacent neurons, with as many taps as the convolution kernel; it has a dilation coefficient d used to control the degree of discontinuity of the neurons participating in the convolution operation, and the calculation formula of the dilated convolution is:

F(t) = SUM_{i=0..K-1} f(i) * x_(t - d*i),

where d is the dilation coefficient, K is the size of the convolution kernel, and f(i) is the weight of the i-th tap of the kernel; when d = 1, the dilated convolution degenerates into an ordinary convolution; by controlling d, the receptive field can be enlarged without increasing the amount of computation.
10. The method according to claim 9, wherein in the graph convolutional neural network-time-domain convolutional neural network model constructed in step S3, the loss function is composed of three parts, namely the node classification loss, the time-slice classification loss, and the final classification loss, and is specifically expressed as:
$$L = \lambda_1 \sum_{i=1}^{T}\sum_{j=1}^{N} L_{node}^{(i,j)} + \lambda_2 \sum_{i=1}^{T} L_{time}^{(i)} + \lambda_3 L_{final}$$

where $L_{node}^{(i,j)}$ is the node classification loss of the $j$-th node at the $i$-th time point, $1 \le i \le T$, $1 \le j \le N$; $L_{time}^{(i)}$ is the time-slice classification loss at the $i$-th time point; $T$ is the number of time points; the node and time-slice terms are the classification losses of the graph convolutional neural network, and $L_{final}$ is the classification loss of the final time-domain convolutional neural network. The hyperparameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ control the influence of the node classification loss, the time-slice classification loss, and the final classification loss respectively, with $\lambda_1, \lambda_2, \lambda_3 \in (0, 1)$ and $\lambda_1 + \lambda_2 + \lambda_3 = 1$. All classification losses use the cross-entropy loss function, specifically expressed as:

$$L_{CE} = -\sum_{c=1}^{C} y_c \log \hat{y}_c$$

where $y_c$ is the true-label indicator for class $c$ and $\hat{y}_c$ is the predicted probability of class $c$.
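The three-part loss can be illustrated with a small NumPy sketch. The weight values in `lam`, the averaging over nodes and time points, and the toy tensor shapes are assumptions for illustration, not values fixed by the claim:

```python
import numpy as np

def cross_entropy(p, y):
    """Cross-entropy of a predicted distribution p for true class index y."""
    return -float(np.log(p[y] + 1e-12))

def total_loss(node_probs, node_y, time_probs, time_y, final_prob, final_y,
               lam=(0.3, 0.3, 0.4)):     # assumed weights; they sum to 1
    """Weighted sum of node, time-slice, and final classification losses.

    node_probs: (T, N, C) per-node predictions at each time point;
    time_probs: (T, C) per-time-slice predictions;
    final_prob: (C,) prediction of the time-domain network.
    """
    l1, l2, l3 = lam
    T, N, _ = node_probs.shape
    node = np.mean([cross_entropy(node_probs[i, j], node_y[i, j])
                    for i in range(T) for j in range(N)])
    time = np.mean([cross_entropy(time_probs[i], time_y[i])
                    for i in range(T)])
    final = cross_entropy(final_prob, final_y)
    return l1 * node + l2 * time + l3 * final

T, N, C = 2, 3, 2
node_probs = np.full((T, N, C), 0.5)     # uninformative predictions
time_probs = np.full((T, C), 0.5)
final_prob = np.array([0.5, 0.5])
node_y = np.zeros((T, N), dtype=int)
time_y = np.zeros(T, dtype=int)

loss = total_loss(node_probs, node_y, time_probs, time_y, final_prob, 0)
print(round(loss, 4))                    # every term is ln 2, so ~0.6931
```

Because the weights sum to 1, uniformly uninformative predictions give a total loss of exactly $\ln 2 \approx 0.6931$ for two classes, a convenient sanity check when wiring up the three loss terms.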
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210814372.2A CN115222688B (en) | 2022-07-12 | 2022-07-12 | Medical image classification method based on graph network time sequence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115222688A true CN115222688A (en) | 2022-10-21 |
CN115222688B CN115222688B (en) | 2023-01-10 |
Family
ID=83612470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210814372.2A Active CN115222688B (en) | 2022-07-12 | 2022-07-12 | Medical image classification method based on graph network time sequence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115222688B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855491A (en) * | 2012-07-26 | 2013-01-02 | 中国科学院自动化研究所 | Brain function magnetic resonance image classification method based on network centrality |
CN108364006A (en) * | 2018-01-17 | 2018-08-03 | 超凡影像科技股份有限公司 | Medical Images Classification device and its construction method based on multi-mode deep learning |
CN110720906A (en) * | 2019-09-25 | 2020-01-24 | 上海联影智能医疗科技有限公司 | Brain image processing method, computer device, and readable storage medium |
CN111667459A (en) * | 2020-04-30 | 2020-09-15 | 杭州深睿博联科技有限公司 | Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion |
WO2021001238A1 (en) * | 2019-07-01 | 2021-01-07 | Koninklijke Philips N.V. | Fmri task settings with machine learning |
CN112766332A (en) * | 2021-01-08 | 2021-05-07 | 广东中科天机医疗装备有限公司 | Medical image detection model training method, medical image detection method and device |
CN113080847A (en) * | 2021-03-17 | 2021-07-09 | 天津大学 | Device for diagnosing mild cognitive impairment based on bidirectional long-short term memory model of graph |
CN113592836A (en) * | 2021-08-05 | 2021-11-02 | 东南大学 | Deep multi-modal graph convolution brain graph classification method |
CN114241240A (en) * | 2021-12-15 | 2022-03-25 | 中国科学院深圳先进技术研究院 | Method and device for classifying brain images, electronic equipment and storage medium |
US20220122250A1 (en) * | 2020-10-19 | 2022-04-21 | Northwestern University | Brain feature prediction using geometric deep learning on graph representations of medical image data |
Non-Patent Citations (4)
Title |
---|
AN ZENG等: "Discovery of Genetic Biomarkers for Alzheimer’s Disease Using Adaptive Convolutional Neural Networks Ensemble and Genome‑Wide Association Studies", 《INTERDISCIPLINARY SCIENCES: COMPUTATIONAL LIFE SCIENCES》 * |
XIAOXIAO LI等: "BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis", 《MEDICAL IMAGE ANALYSIS》 * |
TANG Chaosheng et al.: "Deep learning technology for medical images: development from convolution to graph convolution", Journal of Image and Graphics * |
ZENG An et al.: "An auxiliary diagnosis model for Alzheimer's disease based on 3D convolutional neural networks and regions of interest", Journal of Biomedical Engineering Research * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030308A (en) * | 2023-02-17 | 2023-04-28 | 齐鲁工业大学(山东省科学院) | Multi-mode medical image classification method and system based on graph convolution neural network |
CN115909016A (en) * | 2023-03-10 | 2023-04-04 | 同心智医科技(北京)有限公司 | System, method, electronic device, and medium for analyzing fMRI image based on GCN |
CN117435995A (en) * | 2023-12-20 | 2024-01-23 | 福建理工大学 | Biological medicine classification method based on residual map network |
CN117435995B (en) * | 2023-12-20 | 2024-03-19 | 福建理工大学 | Biological medicine classification method based on residual map network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||