CN115409743A - Model construction method for brain magnetic resonance image processing based on deep learning - Google Patents

Info

Publication number
CN115409743A
CN115409743A
Authority
CN
China
Prior art keywords
magnetic resonance
resonance image
data
brain
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211365286.4A
Other languages
Chinese (zh)
Other versions
CN115409743B (en)
Inventor
Li Qi
Yu Xiaojie
Wu Yan
Song Yu
Gao Ning
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202211365286.4A priority Critical patent/CN115409743B/en
Publication of CN115409743A publication Critical patent/CN115409743A/en
Application granted granted Critical
Publication of CN115409743B publication Critical patent/CN115409743B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Neurology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Image Analysis (AREA)

Abstract

A model construction method for brain magnetic resonance image processing based on deep learning relates to the technical field of brain magnetic resonance image processing and addresses the problem of designing a network model construction method whose output features have high resolution. The method comprises the following steps: acquiring a base dataset comprising structural and functional magnetic resonance images of the brain; extracting data features from the dataset to obtain a training dataset, the training dataset comprising texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images and network attribute features of the functional magnetic resonance images; constructing a framework to which a coordinate attention mechanism is added; acquiring a target algorithm of the framework, the target algorithm being able to self-correct data; and training the framework with the training dataset to obtain a group self-correcting coordinate attention network model. The invention expands the effective field of view, and the output data have high resolution and more distinct features.

Description

Model construction method for brain magnetic resonance image processing based on deep learning
Technical Field
The invention relates to the technical field of brain magnetic resonance image processing, in particular to a model construction method for brain magnetic resonance image processing based on deep learning.
Background
With the development of science and technology, people pay ever more attention to health, and research on the human brain has never stopped and receives growing emphasis. Because model construction and model-based processing offer clear advantages, models are commonly used to analyze and process human brain data, and such models are generally built with convolutional neural networks (CNNs).
Disclosure of Invention
In order to solve the problems, the invention provides a model construction method for brain magnetic resonance image processing based on deep learning.
The technical scheme adopted by the invention for solving the technical problem is as follows:
The model construction method for brain magnetic resonance image processing based on deep learning comprises the following steps:
acquiring a base dataset comprising structural magnetic resonance images of the brain and functional magnetic resonance images of the brain;
extracting data features from the dataset to obtain a training dataset, the training dataset comprising texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images and network attribute features of the functional magnetic resonance images;
constructing a framework to which a coordinate attention mechanism is added;
acquiring a target algorithm of the framework, the target algorithm being able to self-correct data;
and training the framework with the training dataset to obtain a group self-correcting coordinate attention network model.
The invention has the beneficial effects that:
the method can construct a group self-correction coordinated attention network model for brain magnetic resonance image processing based on deep learning, is suitable for any 3D image data set, obviously expands a field of view region through internal communication, enhances the expression capability of the network on features in a basic data set, expands an effective field of view through the model constructed by the method, and finally outputs data with high resolution, more obvious features and more discriminative property.
Drawings
FIG. 1 is a general flow diagram of the present invention.
FIG. 2 is a structural diagram of the framework.
FIG. 3 is a structural diagram of a residual block.
FIG. 4 shows the data processing process of the coordinate attention convolutional layer.
FIG. 5 is a flow chart of the self-calibration.
FIG. 6 is a performance comparison report between the present invention and other studies.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings.
A model construction method for brain magnetic resonance image processing based on deep learning, as shown in FIG. 1, comprises the following steps:
acquiring a base dataset comprising structural magnetic resonance images of the brain and functional magnetic resonance images of the brain;
extracting data features from the dataset to obtain a training dataset, the training dataset comprising texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images and network attribute features of the functional magnetic resonance images;
constructing a framework to which a coordinate attention mechanism is added;
acquiring a target algorithm of the framework, i.e., an algorithm that can self-correct the data;
and training the framework with the training dataset to obtain a group self-correcting coordinate attention network model, i.e., the model for brain magnetic resonance image processing based on deep learning.
The base dataset also includes clinical scales, and the training dataset includes the quantified values of the scales. A clinical scale is a scale used clinically to assess the severity of a disease, for example the severity of Alzheimer's disease.
The group self-correcting coordinate attention network model constructed by this method comprises group convolution, self-correcting convolution and a coordinate attention network.
Features are extracted from the different modalities of data in the base dataset: texture features from the structural magnetic resonance imaging data, and brain network features and network attribute features from the functional magnetic resonance imaging data. Structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging (fMRI) data are image data, while clinical scales (i.e., clinical assessment questionnaires that the patient fills in or answers) are non-image data; the non-image data include, but are not limited to, clinical scales. The clinical scales include the MMSE (Mini-Mental State Examination) and the CDR (Clinical Dementia Rating). The quantified value of a scale includes the total test score and may also include the score of each test item in the scale.
Acquiring the target algorithm of the framework comprises: constructing the framework of the group self-correcting coordinate attention network model and acquiring the target algorithm within that framework. The texture features, brain network features, network attribute features and quantified scale values are finally input into the group self-correcting coordinate attention network model for further computation. The overall framework, shown in FIG. 2, consists of 1 first convolutional layer (Conv layer), 1 first BN layer (batch normalization layer), 1 max pooling layer (MaxPool layer), 1 first activation layer (ReLU layer), one second activation layer, 16 residual blocks (ResBlock), 1 adaptive pooling layer (AdaptiveAvgPool layer), 1 fully connected layer (FC layer) and 1 final softmax (normalized exponential function) layer. With multiple output nodes, the model can address multi-class problems. The sequence is as shown in FIG. 4; the model is a serial structure in which the output of each layer is the input of the next.
The convolutional layers extract different features of the input, and more complex features can be extracted through further convolutional layers. The pooling layer reduces redundant features while avoiding the introduction of errors. The normalization layer prevents the gradient explosion and vanishing gradient problems. The model adopts the residual blocks of the classical ResNet network: by allowing layers with poor effect to be skipped, the network depth can be increased while steadily improving the model. Finally, the fully connected layer converts the feature map into one-dimensional features and improves robustness. The first and second activation layers both use the rectified linear unit as the nonlinear activation function. The max pooling layer performs multi-scale feature learning. The 16 residual blocks may share the same structure and together form a residual network (ResNet) comprising 4 residual units: the first residual unit contains 3 residual blocks, the second 4, the third 6 and the fourth 3. The structure of a residual block is shown in FIG. 3: it comprises a second convolutional layer, a group self-correcting convolutional layer and a third convolutional layer ("Group self-corrected coordinated convolution" in FIG. 3 denotes the group self-correcting coordinate convolution). Each residual block thus has 3 convolutional layers, including the group self-correcting convolution (shown in FIG. 4), connected in series; following the convolutional-layer design of VGG, the residual blocks of ResNet stack several small convolution kernels instead of one large kernel to reduce the number of parameters.
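As a hedged illustration of the layer order and residual-unit layout described above (a sketch, not the patented implementation; the string names and the exact position of the second ReLU are assumptions), the serial structure can be written out as:

```python
# Illustrative layer layout: 4 residual units holding 3, 4, 6 and 3 residual
# blocks (a ResNet-34-style arrangement), 16 blocks in total.
RESIDUAL_UNITS = [3, 4, 6, 3]  # blocks per residual unit

def layer_sequence():
    """Return the serial layer order of the framework as plain strings."""
    layers = ["Conv", "BN", "MaxPool", "ReLU"]          # stem
    for unit, n_blocks in enumerate(RESIDUAL_UNITS, start=1):
        layers += [f"ResBlock{unit}.{i}" for i in range(1, n_blocks + 1)]
    layers += ["ReLU", "AdaptiveAvgPool", "FC", "Softmax"]  # head
    return layers

seq = layer_sequence()
n_res_blocks = sum(1 for name in seq if name.startswith("ResBlock"))
```

The output of each listed layer feeds the next, matching the serial structure described in the text.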
The ReLU activation function reduces the computational cost of the whole process and alleviates overfitting. The group self-correcting convolutional layer carries a coordinate attention mechanism/module. Group self-correction operates over multiple groups; the operations in every group are identical, and the data input to a group are divided into two parts for subsequent processing. Within a group, the data can be self-corrected. The data processing process of the group self-correcting convolutional layer is shown in FIG. 4, where Sigmoid is the name of an activation function.
The target algorithm is detailed below:
the convolution transform of the former filter is calibrated by using the low-dimensional mode after the latter filter transform. According to the invention, information communication between the self-calibration convolution and the filter is realized through multiple groups of heterogeneous convolutions, so that the exact position of each target feature is more accurately positioned, and a convolution view field is increased while a more discriminative feature representation is generated. And then uniformly inputting the features after the group self-correction processing into the attention model. The method of group self-correction is as follows:
as shown in fig. 5, the group self-correction method solves the problem of field limitation, and corrects the original spatial image through multiple groups of self-calibration down-sampling operations. Each set of self-correcting convolution layers, namely each set of self-correcting convolution layers, comprises four heterogeneous filters, wherein the four heterogeneous filters are respectively K 1 、K 2 、K 3 、K 4 Dividing data X input to the group self-correction convolutional layer into X 1 And X 2 Two parts. At X 1 In partial data, original space information is retained, and simple characteristic direct mapping is carried out, namely Y 1 =F 1 (X 1 ) In which F is 1 (X 1 )=X 1 *K 1 ,F 1 Representing a first convolution operation, in this way preventing feature loss. In the second part X 2 In (3), use the remaining three filter pairs X 2 Down-sampling to obtain Y 2 Self-calibration operation is achieved from the whole body while at Y 2 A recalibration operation is performed internally. Y is 1 Series Y 2 Obtaining Y, then coordinating attention of Y, namely adding a coordinating attention mechanism in the frame, thereby achieving the interaction of remote context information and finally obtaining the data after feature extraction
Figure 254872DEST_PATH_IMAGE001
The self-calibrating convolution makes the connection between the feature maps more intuitive through an average pooling operation; in this way the local context information of the features is captured while overfitting is reduced. The input X2 therefore undergoes a three-dimensional average pooling operation:

M2 = AvgPool3d(X2) (1)

where AvgPool3d(·) denotes three-dimensional average pooling and M2 is the data generated by applying the average pooling operation to X2.
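A minimal NumPy sketch of equation (1), assuming a non-overlapping window that halves each spatial dimension (a real AvgPool3d layer also supports strides and padding, which are omitted here):

```python
import numpy as np

def avg_pool3d(x, r=2):
    """Average-pool a (D, H, W) volume with an r x r x r window (eq. 1)."""
    d, h, w = (s // r for s in x.shape)
    return x.reshape(d, r, h, r, w, r).mean(axis=(1, 3, 5))

x2 = np.arange(64, dtype=float).reshape(4, 4, 4)
m2 = avg_pool3d(x2)  # M2 = AvgPool3d(X2), shape (2, 2, 2)
```

Each output voxel is the mean of one 2 x 2 x 2 block of X2, so the overall mean of the volume is preserved.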
Based on the K2 filter, feature mapping is performed on the average-pooled feature M2:

M2' = F2(M2) = M2 * K2 (2)

where F2 denotes the second convolution operation and M2' represents the result of applying the second convolution operation to M2.
After normalization, X2 and M2' are activated together with a Sigmoid activation function:

W2 = S(X2 + M2') (3)

where S(·) denotes the Sigmoid activation function, M2' is restored to the resolution of X2, and W2 represents the result of activating M2' together with X2 through the Sigmoid.
W2 is then used as the calibration weight for element-wise multiplication of co-located elements, performing the further calibration:

Y2' = F3(X2) ⊙ W2 (4)

where F3 denotes the third convolution operation: X2 first undergoes the third convolution F3, and the result is multiplied element-wise with W2 to obtain the calibrated data Y2'.

The feature transformation of the calibrated data Y2' can be expressed as:

Y2 = F4(Y2') (5)

where F4 denotes the fourth convolution operation and Y2 represents its result.

The calibrated output feature Y2 is further combined with the original-spatial-context feature Y1, and coordinate attention is applied to the concatenated feature Y to obtain the final output feature Y'.
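The self-calibration branch of equations (1)-(5) can be sketched in NumPy as follows. This is a hedged illustration of the data flow only: the learned convolutions F2, F3 and F4 are stood in by simple per-element scalings (`f2`, `f3`, `f4` are invented placeholder parameters), and nearest-neighbour up-sampling restores M2' to the resolution of X2.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def avg_pool3d(x, r=2):
    """Non-overlapping r^3 average pooling of a (D, H, W) volume (eq. 1)."""
    d, h, w = (s // r for s in x.shape)
    return x[:d*r, :h*r, :w*r].reshape(d, r, h, r, w, r).mean(axis=(1, 3, 5))

def self_calibrate(x2, f2=1.1, f3=0.9, f4=1.0, r=2):
    m2 = avg_pool3d(x2, r)                 # eq. (1): M2 = AvgPool3d(X2)
    m2p = f2 * m2                          # eq. (2): feature mapping by K2
    up = np.repeat(np.repeat(np.repeat(m2p, r, 0), r, 1), r, 2)  # back to X2 size
    w2 = sigmoid(x2 + up)                  # eq. (3): calibration weights W2
    y2p = (f3 * x2) * w2                   # eq. (4): element-wise calibration
    return f4 * y2p                        # eq. (5): final transformation Y2

x2 = np.ones((4, 4, 4))
y2 = self_calibrate(x2)                    # calibrated output, same shape as X2
```

The Sigmoid keeps the calibration weights in (0, 1), so the branch attenuates rather than amplifies the convolved features.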
A coordinate attention method is adopted to convert the three-dimensional image, from its three directions, into one-dimensional feature encodings, thereby encoding long-range context information.
In previous studies, channel attention typically encoded spatial information with a global pooling method; but because this compresses the global spatial information into a channel descriptor, positional information is difficult to preserve. To prompt the attention module to retain positional information while attending over space and channels, the attention module performs long-range information interaction and embeds positional information into the channel attention. First, a dimension-reduction transform is applied to the three-dimensional image, converting it from three directions into one-dimensional feature encodings by a global pooling method:
z_c = (1/(H × W × L)) Σ_{i=1..H} Σ_{j=1..W} Σ_{l=1..L} y_c(i, j, l) (6)

where H, W and L are respectively the height, width and length of the three-dimensional image Y; i, j and l denote an arbitrary height, width and length; y_c is the c-th channel of the input feature Y on which the dimension-reduction operation is performed; and z_c represents the output of the c-th channel. By 1D-pooling the image from its three dimensions in this way, the input features are aggregated along the different directions into three separate direction-aware feature maps. Specifically, feature transformation is performed in the different dimensions with three pooling kernels (H, 1, 1), (1, W, 1) and (1, 1, L):

z^h = AdaptiveAvgPool3d(Y, (H, 1, 1)), z^w = AdaptiveAvgPool3d(Y, (1, W, 1)), z^l = AdaptiveAvgPool3d(Y, (1, 1, L)) (7)

where AdaptiveAvgPool3d(·) denotes the three-dimensional adaptive average pooling operation; z^h represents the pooled height feature, z^w the pooled width feature, and z^l the pooled length feature.
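The contrast between global pooling (eq. 6) and the three direction-aware poolings (eq. 7) can be seen in a small NumPy sketch; a single channel with axes ordered (H, W, L) is assumed, and all names are illustrative:

```python
import numpy as np

# One channel of the feature volume Y, with H=2, W=3, L=4.
y = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

# eq. (6): global pooling compresses all spatial information into one scalar,
# losing positional information.
z_c = y.mean()

# eq. (7): each direction-aware pooling keeps exactly one direction.
z_h = y.mean(axis=(1, 2))   # kernel (H,1,1) -> shape (H,), height-aware
z_w = y.mean(axis=(0, 2))   # kernel (1,W,1) -> shape (W,), width-aware
z_l = y.mean(axis=(0, 1))   # kernel (1,1,L) -> shape (L,), length-aware
```

Averaging any one of the directional vectors recovers the global descriptor z_c, showing that the three 1D poolings retain strictly more positional information than eq. (6).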
The pooled features z^h, z^w and z^l produced by the above operations are merged with a concat operation:

M = cat(z^h, z^w, z^l)

and the merged data M are convolved:

f = Conv3d(M)

where cat(·) denotes the merge/concatenation operation and the Conv3d(·) function computes a three-dimensional convolution of the given input. As in FIG. 4, f is then processed in sequence: to avoid a shift in the data distribution, it is batch-normalized with a batch normalization method; the normalized result is non-linearly activated; the non-linearly activated features are separated and divided again according to the three dimensions of the image; each division is convolved; and after a Sigmoid activation function the whole feature is re-weighted to obtain the feature Y', preserving the important feature information.
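The concat, transform, split and re-weighting steps above can be sketched as follows. This is a hedged NumPy illustration of the data flow only: a placeholder scaling (`transform`, an invented parameter) stands in for the learned Conv3d + batch normalization + nonlinear activation, and a single channel with axes (H, W, L) is assumed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(y, transform=0.5):
    h, w, l = y.shape
    z_h = y.mean(axis=(1, 2))                # direction-aware pooling (eq. 7)
    z_w = y.mean(axis=(0, 2))
    z_l = y.mean(axis=(0, 1))
    m = np.concatenate([z_h, z_w, z_l])      # merge/concat operation
    f = transform * m                        # stand-in for Conv3d + BN + activation
    a_h, a_w, a_l = np.split(f, [h, h + w])  # divide again by the three dimensions
    # Sigmoid-activated attention maps re-weight the whole feature volume.
    return (y * sigmoid(a_h)[:, None, None]
              * sigmoid(a_w)[None, :, None]
              * sigmoid(a_l)[None, None, :])

y = np.ones((2, 3, 4))
out = coordinate_attention(y)                # re-weighted feature Y'
```

Because each attention map varies along only one direction, the product of the three maps assigns a distinct weight to every spatial position, which is how positional information is embedded into the attention.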
Coordinate attention encodes global information using three complementary 1D global pooling operations, so that it can capture long-range dependencies between spatial locations. The input features are integrated by the group self-correcting coordinate attention network, feature extraction is performed on the basis of an enlarged effective field of view, and classification is finally performed by the softmax (normalized exponential function) layer.
The method can construct a group self-correcting coordinate attention network model that is applicable to any 3D image dataset; it markedly enlarges the field-of-view region through internal communication and enhances the network's ability to express the features in the base dataset. The model constructed by the method enlarges the effective field of view, and the data it finally outputs have high resolution and more distinct, more identifiable features.
Previous studies found that enhancing the feature-extraction capability of convolutional networks is very important. CNNs (convolutional neural networks) have inherent limitations: they cannot capture the shape features of global objects, and the field of view offered by the convolutions in past studies is limited, which restricts the extraction of brain features. In addition, a convolution filter can only learn existing similar patterns; it lacks a large receptive field for capturing sufficiently high-level semantics and neglects the importance of the field of view for feature extraction. Although a 3D CNN (three-dimensional convolutional neural network) with dilated convolutions enlarges the field of view to some extent, it causes a gridding effect that makes the convolution kernel discontinuous, which actually increases the convolution workload without effectively enlarging the field of view, resulting in a loss of output features.
Because of the limited field of view of previous networks, the entire area of a feature could not be captured. In contrast, the group self-calibrating convolution works from a multi-level perspective and encodes long-range context information without adding extra parameters or complexity, so the model can better capture the entire area of a feature.
The common attentions in previous studies are channel attention and spatial attention. Research has proved that channel attention significantly improves model performance, but the channel attention of previous studies could not attend well to positional information, while spatial attention ignores the information in the channel domain and wrongly treats the picture features in every channel equally. For example, CBAM (Convolutional Block Attention Module) can infer attention in turn along the two independent dimensions of channel and space, but it only takes the maximum and average values over the channels at each location as weighting coefficients, which considers only local-range information. Although the parallel attention-enhancement block uses both channel and spatial attention, the two are simply spliced in parallel without refining the channel attention. Yet the convolution operation extracts informative features by mixing cross-channel and spatial information. Positional information plays an important role in generating the spatial feature map, is the key to capturing image structure information, and plays a key role in the model's decisions. However, since channel attention ignores spatial positional information and spatial attention ignores channel information, neither attention alone can fully characterize the features. Refining the channel attention hierarchically and then combining it with spatial attention makes feature extraction more sensitive. Therefore, coordinate attention is proposed: it compresses global spatial information into channel descriptors, encodes channel relations and long-range dependencies with precise positional information, and is sensitive to changes of direction and position while capturing cross-channel information.
Therefore, the network can locate the region of interest more accurately, its expressive capability is enhanced, and the performance of CNNs (convolutional neural networks) is markedly optimized.
Most other multi-modal approaches combine PET (positron emission tomography) and MRI (magnetic resonance imaging), which increases the cost.
The invention adopts group self-correction on the basis of the ResNet (residual network) structure, overcoming shortcomings of previous research such as the limited field of view, and corrects the original spatial image through multiple groups of self-calibrating down-sampling operations. The group self-correcting convolution framework divides the input data into several groups for parallel processing. The convolution filters of a given layer in each group are separated into several non-uniform portions, and the filters in each portion are used in a heterogeneous way. Specifically, the self-calibrating convolution does not convolve the input uniformly in the original space; instead it first converts the input into a low-dimensional embedding by down-sampling.
The essence of the attention mechanism is a set of attention distribution coefficients, i.e., weighting parameters, that can enhance or select the important information of the target while suppressing irrelevant details. Because the convolution operation extracts information by integrating channel and spatial information, the invention extracts features through the coordinate attention mechanism, performing channel and spatial attention simultaneously and converting the three-dimensional image from three directions into one-dimensional feature encodings. It thereby encodes context information over long distances, breaking with the tradition that convolution could only operate within a small region, which is significant for explicitly integrating richer feature information.
While self-correcting and enlarging the field of view, the method attends to the field of view effectively, extracts useful information in a global scope, and performs global recognition without adding redundant information.
The present invention is not intended for disease diagnosis, but the outputs of the model can be used in medical research, for example to support the assessment of AD (Alzheimer's disease), and contribute to early discovery of the disease.
Alzheimer's disease (AD), an irreversible degenerative disease of the central nervous system, accounts for approximately 60-80% of all dementia cases. Because the onset of AD is insidious, by the time it is discovered the condition is often already serious; the patient's brain may have begun to deteriorate up to 20 years before the disease becomes apparent. Since no drug can yet completely cure AD, effective diagnosis in the early stage of the disease is all the more important.
Recent studies have demonstrated the effectiveness of convolutional neural networks (CNNs) for AD diagnosis. Shangran Qiu et al. [Shangran Qiu, Prajakta S. Joshi, Matthew I. Miller, Chonghua Xue, Xiao Zhou, Cody Karjadi, Gary H. Chang, Anant S. Joshi, Brigid Dwyer, Shuhan Zhu, Michelle Kaku, Yan Zhou, Yazan J. Alderazi, Arun Swaminathan, Sachin Kedar, Marie-Helene Saint-Hilaire, Sanford H. Auerbach, Jing Yuan, E. Alton Sartor, Rhoda Au and Vijaya B. Kolachalama, Development and evaluation of an interpretable deep learning framework for Alzheimer's disease classification, Brain, 2020] proposed an interpretable deep learning strategy in which multimodal inputs of MRI (magnetic resonance imaging), age, gender and Mini-Mental State Examination (MMSE) score are fed to a convolutional network to characterize AD (Alzheimer's disease). Evangeline Yee et al. [Evangeline Yee, Da Ma, Karteek Popuri, Lei Wang, Mirza Faisal Beg, for the Alzheimer's Disease Neuroimaging Initiative and the Australian Imaging Biomarkers and Lifestyle study, Construction of MRI-based Alzheimer's disease score based on efficient 3D convolutional neural network: comprehensive validation on 7,902 images from a multi-center dataset, 2021] proposed a three-dimensional convolutional network with dilated convolutions for the study of disease classification.
In addition, an attention mechanism in the network can better focus on different brain regions and learn to distinguish differences in the brain texture structures of different people. Attention mechanisms have proven useful in various classification tasks, and much research has been devoted to improving attention networks. An attention network combining VGG (Visual Geometry Group) with the Convolutional Block Attention Module (CBAM) has been proposed for diagnosing Alzheimer's disease. Lu et al. [Donghua Lu, Karteek Popuri, Gavin Weiguang Ding, Rakesh Balachandar, Mirza Faisal Beg, for the Alzheimer's Disease Neuroimaging Initiative, Multiscale Deep Neural Network based Analysis of FDG-PET Images for the Early Diagnosis of Alzheimer's Disease, Medical Image Analysis, (2018)] designed parallel attention-enhancement block models so that each location receives specific global information as a complement. SENet (Squeeze-and-Excitation Networks) [Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu, Squeeze-and-Excitation Networks, IEEE Conference on Computer Vision and Pattern Recognition, (2019)] models the intrinsic associations between channels simply by squeezing the features.
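The squeeze-and-excitation idea mentioned above can be illustrated with a minimal pure-Python sketch (illustrative only: channels are modelled as nested lists, and the fully connected weight matrices `w1`/`w2` are placeholder parameters standing in for learned weights):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_reweight(feature_maps, w1, w2):
    """SE-style channel reweighting.
    feature_maps: list of C channels, each an H x W nested list.
    w1: C x R weights of the first FC layer; w2: R x C weights of the second."""
    C = len(feature_maps)
    # Squeeze: global average pooling reduces each channel to one descriptor
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> Sigmoid gives one weight per channel
    hidden = [max(0.0, sum(z[c] * w1[c][k] for c in range(C))) for k in range(len(w1[0]))]
    s = [sigmoid(sum(hidden[k] * w2[k][c] for k in range(len(hidden)))) for c in range(C)]
    # Scale: every channel is multiplied by its attention weight
    return [[[v * s[c] for v in row] for row in feature_maps[c]] for c in range(C)]
```

With identity weight matrices, a channel whose mean activation is larger receives a larger (sigmoid-bounded) weight, which is the "intrinsic association between channels" the text refers to.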
Further analysis based on the features output by the model constructed by the present method therefore has high clinical value.
The data obtained from the model produced by the construction method of the present invention were analyzed. As shown in fig. 6, which compares the performance of the present invention with other studies reported in the literature, the present invention was trained on the ADNI database and tested on a non-overlapping ADNI dataset for the classification tasks AD vs. LMCI (late mild cognitive impairment) vs. EMCI (early mild cognitive impairment) vs. NC (normal control, normal cognition), and EMCI vs. LMCI.
Wherein ACC = (TP + TN)/(TP + TN + FP + FN), SEN = TP/(TP + FN), and SPE = TN/(TN + FP); TP denotes true positives, TN true negatives, FP false positives and FN false negatives, and ACC, SEN and SPE denote accuracy, sensitivity and specificity, respectively.
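The three metrics can be computed directly from the confusion-matrix counts, for example:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # ACC = (TP+TN)/(TP+TN+FP+FN)
    sen = tp / (tp + fn)                   # SEN = TP/(TP+FN)
    spe = tn / (tn + fp)                   # SPE = TN/(TN+FP)
    return acc, sen, spe
```

For instance, with 40 true positives, 50 true negatives, 10 false positives and no false negatives, accuracy is 0.9, sensitivity is 1.0 and specificity is 50/60.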
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (9)

1. The model construction method for brain magnetic resonance image processing based on deep learning is characterized by comprising the following steps:
acquiring a basic data set comprising structural magnetic resonance images of the brain and functional magnetic resonance images of the brain;
extracting data features from the data set to obtain a data set for training, wherein the data set for training comprises texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images and network attribute features of the functional magnetic resonance images;
constructing a framework incorporating a coordinated attention mechanism;
acquiring a target algorithm of the framework, wherein the target algorithm is capable of self-correcting the data;
and training the framework with the data set for training to obtain a group self-correcting coordinated attention network model.
2. The model construction method for brain magnetic resonance image processing based on deep learning of claim 1, wherein the basic data set further comprises clinical scales and the data set for training comprises quantified values of the scales.
3. The model construction method for brain magnetic resonance image processing based on deep learning as claimed in claim 1, characterized in that acquiring the target algorithm of the framework comprises: constructing a framework of the group self-correcting coordinated attention network model, and acquiring the target algorithm in the framework.
4. The model construction method for brain magnetic resonance image processing based on deep learning of claim 1, wherein the framework comprises a first convolution layer, a first batch normalization layer, a first activation layer, a maximum pooling layer, a first residual unit, a second residual unit, a third residual unit, a fourth residual unit, a second activation layer, an adaptive pooling layer, a fully-connected layer and a normalized exponential function (Softmax) layer.
5. The model building method for brain magnetic resonance image processing based on deep learning of claim 4, wherein the first residual unit, the second residual unit, the third residual unit and the fourth residual unit each comprise a plurality of residual blocks, and the residual blocks comprise a second convolution layer, a group self-correcting convolution layer and a third convolution layer.
6. The model building method for brain magnetic resonance image processing based on deep learning of claim 1, wherein the framework comprises a group self-correcting convolutional layer.
7. The model construction method for brain magnetic resonance image processing based on deep learning as claimed in claim 1, characterized in that the target algorithm comprises the following steps:
dividing the data X input to the group self-correcting convolution layer into two parts, X1 and X2; the group self-correcting convolution layer comprises four heterogeneous filters K1, K2, K3 and K4;
for the data X1, the original spatial information is retained and a direct feature mapping is performed, namely Y1 = F1(X1), F1(X1) = X1 * K1, where F1 denotes the first convolution operation; for the data X2, the filters K2, K3 and K4 are used to down-sample X2 to obtain Y2; Y1 is concatenated with Y2 to obtain Y;
performing coordinated attention on Y to achieve long-range context information interaction and obtain the feature-extracted data Y′.
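The split-and-concatenate data flow described in this claim can be sketched in plain Python (an illustrative sketch only: channels are modelled as flat lists, and the first convolution `f1` and the calibration branch `calibrate` are passed in as placeholder callables rather than real convolutions):

```python
def group_self_correcting_layer(x, f1, calibrate):
    """x: list of C channel arrays. The input is split into X1/X2:
    X1 keeps the original spatial information via a direct mapping F1;
    X2 goes through the self-correcting (calibration) branch;
    the two outputs are concatenated along the channel axis."""
    c = len(x) // 2
    x1, x2 = x[:c], x[c:]
    y1 = [f1(ch) for ch in x1]         # Y1 = F1(X1) = X1 * K1
    y2 = [calibrate(ch) for ch in x2]  # Y2 from the K2/K3/K4 branch
    return y1 + y2                     # Y = concat(Y1, Y2)
```

The key design point visible here is that only half the channels pass through the (more expensive) calibration branch, while the other half keep a cheap identity-like path.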
8. The model construction method for brain magnetic resonance image processing based on deep learning as claimed in claim 7, characterized in that the specific process of using the filters K2, K3 and K4 to down-sample X2 and obtain Y2 comprises:
performing a three-dimensional average pooling operation on X2:
M2 = AvgPool3d(X2)  (1)
where AvgPool3d(·) denotes three-dimensional average pooling and M2 is the data generated after average pooling of X2;
performing feature mapping on the average-pooled feature M2 with the filter K2:
M2′ = F2(M2) = M2 * K2  (2)
where F2 denotes the second convolution operation and M2′ is the result of performing the second convolution operation on M2;
summing X2 with M2′ (up-sampled back to the resolution of X2), normalizing, and applying the Sigmoid activation function to obtain the calibration weight A:
A = S(X2 + Up(M2′))  (3)
Y2′ = F3(X2) ⊙ A  (4)
Y2 = F4(Y2′)  (5)
where F3 denotes the third convolution operation (with filter K3), F4 denotes the fourth convolution operation (with filter K4), Up(·) denotes up-sampling, ⊙ denotes element-wise multiplication, Y2 is the result of performing the fourth convolution operation on Y2′, and S(·) denotes the Sigmoid activation function.
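Under the simplifying assumptions that signals are one-dimensional and each filter acts as a scalar (point-wise) multiplication, the calibration branch of equations (1)-(5) can be sketched as follows (an illustrative sketch of the data flow, not the patented implementation):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def avg_pool(x, r):
    # stride-r average pooling (the down-sampling step)
    return [sum(x[i:i + r]) / r for i in range(0, len(x) - r + 1, r)]

def upsample(x, r):
    # nearest-neighbour up-sampling back to the original resolution
    return [v for v in x for _ in range(r)]

def self_calibrate(x2, k2, k3, k4, r=2):
    m2 = avg_pool(x2, r)                        # (1) M2 = AvgPool(X2)
    m2p = [v * k2 for v in m2]                  # (2) M2' = F2(M2) = M2 * K2
    up = upsample(m2p, r)
    a = [sigmoid(p + q) for p, q in zip(x2, up)]  # (3) A = S(X2 + Up(M2'))
    y2p = [v * k3 * w for v, w in zip(x2, a)]   # (4) Y2' = F3(X2) ⊙ A
    return [v * k4 for v in y2p]                # (5) Y2 = F4(Y2')
```

The pooled-then-up-sampled branch gives each position a summary of its neighbourhood, so the Sigmoid gate A "self-corrects" the response of X2 with context beyond its local receptive field.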
9. The model construction method for brain magnetic resonance image processing based on deep learning as claimed in claim 7, characterized in that the specific process of performing coordinated attention on Y to achieve long-range context information interaction and obtain the feature-extracted data Y′ comprises:
performing a dimension-reduction transformation on the three-dimensional image Y, converting it from three directions into one-dimensional feature codes by a global pooling method:
zc = (1/(H × W × L)) Σi Σj Σl yc(i, j, l)  (6)
where H, W and L are respectively the height, width and length of the three-dimensional image, i denotes an arbitrary height, j denotes an arbitrary width, l denotes an arbitrary length, yc denotes the c-th channel of the input feature Y on which the dimension-reduction operation is performed, and zc represents the output of the c-th channel;
performing feature transformation along the different dimensions with three different pooling kernels (H, 1, 1), (1, W, 1) and (1, 1, L):
zH = AdaptiveAvgPool3d(H,1,1)(Y), zW = AdaptiveAvgPool3d(1,W,1)(Y), zL = AdaptiveAvgPool3d(1,1,L)(Y)  (7)
where AdaptiveAvgPool3d(·) denotes a three-dimensional adaptive average pooling operation, zH denotes the pooled height feature, zW denotes the pooled width feature, and zL denotes the pooled length feature;
performing a concat merging operation on the three directionally pooled features zH, zW and zL produced by the above operations:
M = cat(zH, zW, zL)
convolving the merged data M:
f = Conv3d(M)
to f, sequentially applying batch normalization to avoid deviation of the data distribution, performing nonlinear activation, separating the nonlinearly activated features and splitting them according to the three dimensions of the image, convolving each part separately, and, after a Sigmoid activation function, performing a re-weighting operation on the overall features to obtain Y′;
where cat(·) represents the merge-splice (concatenation) operation and the Conv3d(·) function represents the computation of the three-dimensional convolution of a given input.
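The directional pooling of equation (7) can be illustrated for a single channel with plain Python (an illustrative sketch in which nested lists stand in for tensors; each direction keeps one axis and averages over the other two):

```python
def directional_pools(y):
    """y: one channel as an H x W x L nested list.
    Returns the three one-dimensional codes corresponding to the
    pooling kernels (H,1,1), (1,W,1) and (1,1,L)."""
    H, W, L = len(y), len(y[0]), len(y[0][0])
    # keep height, average over width and length: z^H
    zh = [sum(y[i][j][l] for j in range(W) for l in range(L)) / (W * L) for i in range(H)]
    # keep width, average over height and length: z^W
    zw = [sum(y[i][j][l] for i in range(H) for l in range(L)) / (H * L) for j in range(W)]
    # keep length, average over height and width: z^L
    zl = [sum(y[i][j][l] for i in range(H) for j in range(W)) / (H * W) for l in range(L)]
    return zh, zw, zl

def concat_codes(zh, zw, zl):
    # the concat merging step: M = cat(z^H, z^W, z^L)
    return zh + zw + zl
```

Because each code retains position along exactly one axis, the subsequent convolution-plus-Sigmoid reweighting can attend to "where" along height, width and length separately, which is how the coordinated attention achieves long-range context interaction.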

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211365286.4A CN115409743B (en) 2022-11-03 2022-11-03 Model construction method for brain magnetic resonance image processing based on deep learning


Publications (2)

Publication Number Publication Date
CN115409743A (en) 2022-11-29
CN115409743B (en) 2023-03-24

Family

ID=84169099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211365286.4A Active CN115409743B (en) 2022-11-03 2022-11-03 Model construction method for brain magnetic resonance image processing based on deep learning

Country Status (1)

Country Link
CN (1) CN115409743B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129235A (en) * 2023-04-14 2023-05-16 英瑞云医疗科技(烟台)有限公司 Cross-modal synthesis method for medical images from cerebral infarction CT to MRI conventional sequence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882503A (en) * 2020-08-04 2020-11-03 深圳高性能医疗器械国家研究院有限公司 Image noise reduction method and application thereof
CN112329871A (en) * 2020-11-11 2021-02-05 河北工业大学 Pulmonary nodule detection method based on self-correction convolution and channel attention mechanism
CN112837274A (en) * 2021-01-13 2021-05-25 南京工业大学 Classification and identification method based on multi-mode multi-site data fusion
CN114724155A (en) * 2022-04-19 2022-07-08 湖北工业大学 Scene text detection method, system and equipment based on deep convolutional neural network
CN114757911A (en) * 2022-04-14 2022-07-15 电子科技大学 Magnetic resonance image auxiliary processing system based on graph neural network and contrast learning
CN115049629A (en) * 2022-06-27 2022-09-13 太原理工大学 Multi-mode brain hypergraph attention network classification method based on line graph expansion


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Anees Abrol et al.: "Multimodal Data Fusion of Deep Learning and Dynamic Functional Connectivity Features to Predict Alzheimer's Disease Progression", IEEE *
Jiang-Jiang Liu et al.: "Improving Convolutional Networks with Self-Calibrated Convolutions", CVPR 2020 *
Tong Luo et al.: "A Vehicle-to-Infrastructure beyond Visual Range Cooperative Perception Method Based on Heterogeneous Sensors", Energies *
Zhenyu Lin et al.: "CT-Guided Survival Prediction of Esophageal Cancer", IEEE Journal of Biomedical and Health Informatics *



Similar Documents

Publication Publication Date Title
Cheng et al. CNNs based multi-modality classification for AD diagnosis
Lei et al. Relational-regularized discriminative sparse learning for Alzheimer’s disease diagnosis
WO2023077603A1 (en) Prediction system, method and apparatus for abnormal brain connectivity, and readable storage medium
Li et al. Alzheimer's disease classification based on combination of multi-model convolutional networks
US11170300B2 (en) Explainable neural net architecture for multidimensional data
Akay et al. Deep learning classification of systemic sclerosis skin using the MobileNetV2 model
Sudharsan et al. Alzheimer's disease prediction using machine learning techniques and principal component analysis (PCA)
Pannu et al. Deep learning based image classification for intestinal hemorrhage
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN112674720B (en) Alzheimer disease pre-judgment method based on 3D convolutional neural network
Rahim et al. Prediction of Alzheimer's progression based on multimodal deep-learning-based fusion and visual explainability of time-series data
CN115409743B (en) Model construction method for brain magnetic resonance image processing based on deep learning
Fouladi et al. The use of artificial neural networks to diagnose Alzheimer’s disease from brain images
Albishri et al. AM-UNet: automated mini 3D end-to-end U-net based network for brain claustrum segmentation
Mehmood et al. Early Diagnosis of Alzheimer's Disease Based on Convolutional Neural Networks.
Thangavel et al. EAD-DNN: Early Alzheimer's disease prediction using deep neural networks
Zaina et al. An exemplar pyramid feature extraction based Alzheimer disease classification method
CN112863664A (en) Alzheimer disease classification method based on multi-modal hypergraph convolutional neural network
Dayananda et al. A squeeze U-SegNet architecture based on residual convolution for brain MRI segmentation
Dai et al. DE-JANet: A unified network based on dual encoder and joint attention for Alzheimer’s disease classification using multi-modal data
CN116797817A (en) Autism disease prediction technology based on self-supervision graph convolution model
CN116310479A (en) Alzheimer's disease early identification system based on multi-center structure magnetic resonance image
Anitha et al. Diagnostic framework for automatic classification and visualization of Alzheimer’s disease with feature extraction using wavelet transform
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey
Rahim et al. Time-series visual explainability for Alzheimer’s disease progression detection for smart healthcare

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant