CN115409743B - Model construction method for brain magnetic resonance image processing based on deep learning - Google Patents


Info

Publication number
CN115409743B
CN115409743B CN202211365286.4A CN202211365286A CN115409743B
Authority
CN
China
Prior art keywords
magnetic resonance
data
brain
resonance image
convolution
Prior art date
Legal status
Active
Application number
CN202211365286.4A
Other languages
Chinese (zh)
Other versions
CN115409743A (en)
Inventor
李奇
于潇洁
武岩
宋雨
高宁
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN202211365286.4A
Publication of CN115409743A
Application granted
Publication of CN115409743B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 - Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 - Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042 - Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

A model construction method for brain magnetic resonance image processing based on deep learning relates to the technical field of brain magnetic resonance image processing and addresses the problem of designing a network model whose output features retain high resolution. The method comprises the following steps: acquiring a basic data set comprising structural and functional magnetic resonance images of the brain; extracting data features from the data set to obtain a training data set, wherein the training data set comprises texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images, and network attribute features of the functional magnetic resonance images; constructing a framework incorporating a coordinated attention mechanism; acquiring a target algorithm for the framework, wherein the target algorithm can self-correct data; and training the framework with the training data set to obtain a group self-correction coordinated attention network model. The invention expands the effective field of view, and the output data has high resolution and more distinct features.

Description

Model construction method for brain magnetic resonance image processing based on deep learning
Technical Field
The invention relates to the technical field of brain magnetic resonance image processing, in particular to a model construction method for brain magnetic resonance image processing based on deep learning.
Background
With the development of science and technology, people pay increasing attention to health, and research on the human brain has never stopped and receives ever more emphasis. Because model building and model-based processing offer clear advantages, models are generally adopted for analyzing and processing human brain data, and such models are typically built with convolutional neural networks (CNNs).
Disclosure of Invention
To solve the above problems, the invention provides a model construction method for brain magnetic resonance image processing based on deep learning.
The technical scheme adopted by the invention for solving the technical problem is as follows:
the model construction method for brain magnetic resonance image processing based on deep learning comprises the following steps:
acquiring a basic data set comprising structural magnetic resonance images of the brain and functional magnetic resonance images of the brain;
extracting data features from the data set to obtain a training data set, wherein the training data set comprises texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images, and network attribute features of the functional magnetic resonance images;
constructing a framework incorporating a coordinated attention mechanism;
acquiring a target algorithm for the framework, wherein the target algorithm can self-correct data;
and training the framework with the training data set to obtain a group self-correction coordinated attention network model.
The invention has the beneficial effects that:
the method can construct a group self-correction coordinated attention network model for brain magnetic resonance image processing based on deep learning, is suitable for any 3D image data set, obviously expands a field of view region through internal communication, enhances the expression capability of the network on features in a basic data set, expands an effective field of view through the model constructed by the method, and finally outputs data with high resolution, more obvious features and more discriminative property.
Drawings
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 is a structural diagram of the framework.
Fig. 3 is a structural diagram of a residual block.
FIG. 4 shows the data processing process of coordinated attention in the convolutional layer.
FIG. 5 is a flow chart of self-calibration.
Figure 6 is a report comparing the performance of the present invention with other studies.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings.
A model construction method for brain magnetic resonance image processing based on deep learning, shown in figure 1, comprises the following steps:
acquiring a basic data set comprising structural magnetic resonance images of the brain and functional magnetic resonance images of the brain;
extracting data features from the data set to obtain a training data set, wherein the training data set comprises texture features of the structural magnetic resonance images, brain network features of the functional magnetic resonance images, and network attribute features of the functional magnetic resonance images;
constructing a framework incorporating a coordinated attention mechanism;
acquiring a target algorithm for the framework, wherein the target algorithm can self-correct data, i.e., the target algorithm is self-correcting;
and training the framework with the training data set to obtain a group self-correction coordinated attention network model, i.e., the model for brain magnetic resonance image processing based on deep learning.
The basic data set also includes a clinical scale, and the training data set includes quantified values of the scale. Clinical scales are scales used clinically to assess the severity of a disease, for example the severity of Alzheimer's disease.
The group self-correction coordinated attention network model constructed by the method combines group convolution, self-correction convolution, and a coordinated attention network.
Features are extracted from the different modalities in the basic data set: texture features from the structural magnetic resonance imaging data, and brain network features and network attribute features from the functional magnetic resonance imaging data. Structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging (fMRI) data are image data; the non-image data includes, but is not limited to, clinical scales (i.e., clinical assessment tests that patients fill in or answer). Clinical scales include the MMSE (Mini-Mental State Examination) and the CDR (Clinical Dementia Rating). The quantified value of a scale includes the total score of the test and may also include the score of each test item in the scale.
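As an illustration of this feature-extraction step, the brain network features of fMRI are commonly summarized as a functional connectivity matrix over region-of-interest (ROI) time series, with simple graph attributes derived from it. The sketch below follows that common interpretation; the function names, the Pearson-correlation choice, and the 0.3 threshold are illustrative assumptions, not the patent's prescribed procedure.

```python
import numpy as np

def brain_network_features(timeseries: np.ndarray) -> np.ndarray:
    """Build a functional brain network (connectivity matrix) from ROI
    time series extracted from fMRI.

    timeseries: array of shape (n_rois, n_timepoints).
    Returns the Pearson-correlation adjacency matrix (n_rois, n_rois).
    """
    adj = np.corrcoef(timeseries)   # pairwise Pearson correlation
    np.fill_diagonal(adj, 0.0)      # drop self-connections
    return adj

def network_attribute_features(adj: np.ndarray, threshold: float = 0.3) -> dict:
    """Simple network attribute features (node degree, mean strength)
    computed from a thresholded connectivity matrix."""
    binary = (np.abs(adj) > threshold).astype(int)
    return {
        "degree": binary.sum(axis=1),         # connections per ROI
        "mean_strength": np.abs(adj).mean(),  # average connection weight
    }
```

Texture features of sMRI and the quantified scale values would be computed separately and concatenated with these network features before being fed to the model.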
Obtaining the target algorithm for the framework comprises: constructing the framework of the group self-correction coordinated attention network model and acquiring the target algorithm within that framework. The texture features, brain network features, network attribute features, and quantified scale values are finally input into the group self-correction coordinated attention network model for further operation. The overall framework, shown in fig. 2, consists of 1 first convolution layer (Conv layer), 1 first BN layer (batch normalization layer), 1 max pooling layer (MaxPool layer), 1 first activation layer (ReLU layer), one second activation layer, 16 residual blocks (ResBlock), 1 adaptive pooling layer (AdaptiveAvgPool layer), 1 fully connected layer (FC layer), and 1 final softmax (normalized exponential function) layer. With multiple output nodes, the model can address multi-class problems. The order of the layers is as shown in fig. 2; the model is a serial structure in which the output of each layer is the input of the next.
The convolutional layers extract different features of the input, and more complex features can be extracted through further iterations of convolution. The pooling layer reduces redundant features while avoiding the introduction of errors. The normalization layer prevents gradient explosion and gradient vanishing. The model adopts the residual block of the classical ResNet network: by skipping layers with poor effect, increasing depth can steadily improve the model. Finally, the fully connected layer converts the feature map into one-dimensional features and improves robustness. The first and second activation layers both adopt the rectified linear unit as the nonlinear activation function. A max pooling layer performs multi-scale feature learning. The 16 residual blocks may have the same structure and form a residual network (ResNet) with 4 residual units: the first unit contains 3 residual blocks, the second 4, the third 6, and the fourth 3. The structure of a residual block, shown in fig. 3, comprises a second convolution layer, a group self-correction convolution layer, and a third convolution layer; in fig. 3, "Group self-correction coordination convolution" denotes the group self-correction coordinated-attention convolution. Each residual block thus has 3 convolutional layers, including the group self-correction convolution (shown in fig. 5), connected in series; the residual block of ResNet follows the convolutional-layer design of VGG, stacking several small convolution kernels instead of one large kernel to reduce the required parameters.
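The serial layout described above (stem convolution, BN, ReLU, max pooling, 16 residual blocks in units of 3/4/6/3, adaptive pooling, FC, softmax) can be sketched in PyTorch as follows. This is an illustration only: the channel width, kernel sizes, and the plain Conv3d standing in for the group self-correction coordinated-attention convolution are assumptions for readability, not the patented configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Three serial convolutions with a skip connection. The middle
    convolution stands in for the group self-correction
    coordinated-attention convolution described in the text."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 1),             # second conv layer
            nn.Conv3d(channels, channels, 3, padding=1),  # stand-in for group self-correction conv
            nn.Conv3d(channels, channels, 1),             # third conv layer
            nn.BatchNorm3d(channels),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))               # skip connection

class Framework(nn.Module):
    """Serial framework: Conv -> BN -> ReLU -> MaxPool, four residual
    units of 3/4/6/3 blocks (16 total), adaptive pooling, FC, softmax."""
    def __init__(self, in_channels=1, width=16, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, width, 7, stride=2, padding=3),
            nn.BatchNorm3d(width),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(3, stride=2, padding=1),
        )
        self.blocks = nn.Sequential(
            *[ResidualBlock(width) for _ in range(3 + 4 + 6 + 3)])
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(width, num_classes)
    def forward(self, x):
        x = self.pool(self.blocks(self.stem(x))).flatten(1)
        return torch.softmax(self.fc(x), dim=1)
```

With multiple output nodes (here `num_classes=4`), the softmax output supports the multi-class studies (e.g., AD vs. LMCI vs. EMCI vs. NC) discussed later.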
The ReLU activation function saves computation over the whole process and alleviates overfitting. The group self-correction convolutional layer carries a coordinated attention mechanism/module. "Group" in group self-correction refers to multiple groups of data, not to intra-group versus extra-group; the operation in each group is the same, and the data input to a group is divided into two parts for subsequent operation. Within each group, the data can be self-corrected. The data processing of the group self-correction convolutional layer is shown in FIG. 5.
The target algorithm is detailed below:
the convolution transform of the former filter is calibrated by using the low-dimensional mode after the latter filter transform. The self-calibration convolution group realizes information communication with the filter through multi-group heterogeneous convolution, so that the exact position of each target feature is more accurately positioned, the convolution field is increased, and meanwhile, a more discriminant feature representation is generated. And uniformly inputting the features after the group self-correction processing into the attention model. The method of group self-correction is as follows:
as shown in fig. 5, the group self-correction method solves the problem of field limitation, and corrects the original aerial image through multiple groups of self-calibration down-sampling operations. Each set of self-correcting convolution layers, namely each set of self-correcting convolution layers, comprises four heterogeneous filters, wherein the four heterogeneous filters are respectively K 1 、K 2 、K 3 、K 4 Dividing data X input to the group self-correction convolutional layer into X 1 And X 2 Two parts. At X 1 In partial data, original space information is retained, and simple characteristic direct mapping is carried out, namely Y 1 =F 1 (X 1 ) In which F is 1 (X 1 )=X 1 *K 1 ,F 1 Representing a first convolution operation, in this way preventing feature loss. In the second part X 2 In (3), the remaining three filter pairs X are utilized 2 Down-sampling to obtain Y 2 Self-calibration operation is achieved from the whole body while at Y 2 A recalibration operation is performed internally. Y is 1 Series Y 2 And obtaining Y, and then carrying out coordinated attention on the Y, namely adding a coordinated attention mechanism in the frame, thereby achieving the interaction of the remote context information and finally obtaining the data Y' after the characteristic extraction.
The self-calibration convolution makes the connection between feature maps more direct through an average pooling operation; in this way, local context information of the features is captured while overfitting is reduced. The input X₂ therefore undergoes a three-dimensional average pooling operation:
M₂ = AvgPool3d(X₂) (1)
where AvgPool3d(·) denotes three-dimensional average pooling and M₂ is the data generated by applying the average pooling operation to X₂.
The average-pooled feature M₂ is mapped with the K₂ filter:
X′₂ = F₂(M₂) (2)
where F₂ denotes the second convolution operation and X′₂ is the result of applying it to M₂.
X 2 And X' 2 And activating by using a Sigmoid activation function after normalization:
X″ 2 =S(X 2 +X′ 2 ) (3)
wherein S (-) denotes a Sigmoid activation function.
X 'produced above' 2 And X 2 Together with Sigmoid activation to obtain X ″) 2 ,X″ 2 Is X' 2 And X 2 Together with the result of Sigmoid activation.
From X' 2 And performing further calibration work as the calibration weight of the element-wise multiplication corresponding to the co-located element:
Y′ 2 =F 3 (X 2 )·X″ 2 (4)
F 3 representing a third convolution operation, i.e. first pair X 2 Carrying out F 3 A third convolution operation, and then the resulting result is compared with X ″ 2 Carrying out the corresponding multiplication operation of the same-position elements to obtain calibrated data Y' 2
The calibrated data Y' 2 Performing a feature transformation may be expressed as:
Y 2 =F 4 (Y′ 2 ) (5)
F 4 denotes a fourth convolution operation, Y 2 Represents Y' 2 The result of performing the fourth convolution operation.
The calibrated output feature Y₂ is then combined with the original spatial-context feature Y₁, and coordinated attention is applied to the concatenated feature Y to obtain the final output feature Y′.
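Equations (1) to (5) describe one pass through the self-calibration branch. A minimal NumPy sketch of that data flow is given below; the (C, C) channel-mixing matrices standing in for the filters K₂, K₃, K₄, the pooling factor r, and the nearest-neighbour upsampling used to restore the resolution of X′₂ before the addition in eq. (3) are all illustrative assumptions, not the patent's exact operators.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def avg_pool3d(x, r=2):
    """Eq. (1): average pooling over r*r*r spatial neighbourhoods.
    x has shape (C, D, H, W) with D, H, W divisible by r."""
    c, d, h, w = x.shape
    return x.reshape(c, d // r, r, h // r, r, w // r, r).mean(axis=(2, 4, 6))

def upsample3d(x, r=2):
    """Nearest-neighbour upsampling back to the original resolution."""
    return x.repeat(r, axis=1).repeat(r, axis=2).repeat(r, axis=3)

def self_calibration(x2, F2, F3, F4, r=2):
    """Eqs. (1)-(5) of the self-calibration branch. F2, F3, F4 stand in
    for the filters K2, K3, K4: here simple (C, C) channel-mixing
    matrices applied at every voxel (an illustrative assumption)."""
    mix = lambda F, t: np.einsum('oc,cdhw->odhw', F, t)
    m2 = avg_pool3d(x2, r)               # (1) M2 = AvgPool3d(X2)
    x2p = upsample3d(mix(F2, m2), r)     # (2) X'2 = F2(M2), then upsampled
    x2pp = sigmoid(x2 + x2p)             # (3) X''2 = S(X2 + X'2)
    y2p = mix(F3, x2) * x2pp             # (4) Y'2 = F3(X2) * X''2 (element-wise)
    return mix(F4, y2p)                  # (5) Y2 = F4(Y'2)
```

In the full model, the returned Y₂ would be concatenated with Y₁ and passed to the coordinated attention stage.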
The coordinated attention method converts the three-dimensional image from three directions into one-dimensional feature encodings, thereby encoding context information over long ranges.
In previous studies, channel attention typically used a global pooling method to encode spatial information, but because it compresses global spatial information into channel descriptors, position information is difficult to preserve. To let the attention module retain position information while attending to spatial channels, the module performs long-range information interaction and embeds position information into channel attention. First, a dimension-reduction transformation is applied to the three-dimensional image, converting it from three directions into one-dimensional feature encodings by global pooling:
z_c = (1/(H·W·L)) Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{l=1}^{L} y_c(i, j, l) (6)
where H, W, and L are the height, width, and length of the three-dimensional image Y; i indexes height, j width, and l length; y_c denotes the c-th channel of the input feature Y undergoing the dimension-reduction operation; and z_c is the output of the c-th channel. In this way the image is 1D-pooled from three dimensions, and the input features are aggregated along the different directions into three separate direction-aware feature maps. Specifically, feature transformation is performed in the different dimensions by three different pooling kernels (H, 1, 1), (1, W, 1), and (1, 1, L):
H′ = AdaptiveAvgPool3d(H, 1, 1)
W′ = AdaptiveAvgPool3d(1, W, 1)
L′ = AdaptiveAvgPool3d(1, 1, L) (7)
where AdaptiveAvgPool3d(·) denotes the three-dimensional adaptive average pooling operation, H′ the pooled height feature, W′ the pooled width feature, and L′ the pooled length feature.
The pooled features H′, W′, and L′ are merged by a concat operation: M = cat(H′, W′, L′), and the merged data M is convolved: M′ = Conv3d(M), where cat(·) denotes the merge-splicing operation and Conv3d(·) computes a three-dimensional convolution of the given input. As shown in fig. 4 (where Sigmoid is the name of the activation function), M′ is then batch-normalized to avoid distribution shift of the data; the normalized result undergoes nonlinear activation; the activated features are separated, dividing the image according to the three dimensions; after division, each part is convolved separately; and after the Sigmoid activation function, the overall features are re-weighted to obtain Y′, so that important feature information is retained.
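The pool, activate, and re-weight flow of this stage can be sketched in NumPy as follows. This simplified version keeps only the three directional poolings of eq. (7), the Sigmoid activation, and the final re-weighting; the concat, convolution, and batch-normalization steps in between are omitted, so it illustrates the data flow rather than the full module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinated_attention(y):
    """Sketch of coordinated attention for a 3D feature map y of shape
    (C, H, W, L). Directional pooling is a mean over the two remaining
    axes; the intermediate conv/BN stages are omitted for brevity."""
    # Eq. (7): three complementary 1D global poolings
    h_feat = y.mean(axis=(2, 3))   # (C, H): pool over W and L
    w_feat = y.mean(axis=(1, 3))   # (C, W): pool over H and L
    l_feat = y.mean(axis=(1, 2))   # (C, L): pool over H and W
    # Per-direction Sigmoid weights, broadcast back over the other axes
    a_h = sigmoid(h_feat)[:, :, None, None]
    a_w = sigmoid(w_feat)[:, None, :, None]
    a_l = sigmoid(l_feat)[:, None, None, :]
    # Re-weight the input features to obtain Y'
    return y * a_h * a_w * a_l
```

Because each weight varies along exactly one spatial axis, the re-weighting is sensitive to position along all three directions while still capturing cross-channel information.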
Coordinated attention (coordinate attention) encodes global information using three complementary 1D global pooling operations, so that it can capture long-range dependencies between spatial locations. The input features are integrated by the group self-correction coordinated attention network, feature extraction is performed over the expanded effective field of view, and classification is finally performed by the softmax (normalized exponential function) layer.
The method constructs a group self-correction coordinated attention network model applicable to any 3D image data set. It markedly expands the field-of-view region through internal communication and enhances the network's ability to express the features in the basic data set; the constructed model expands the effective field of view, and the final output data has high resolution and features that are more distinct and more identifiable.
Previous studies found that enhancing the feature-extraction capability of convolutional networks is very important. CNNs (convolutional neural networks) have inherent limitations: they cannot capture the shape features of global objects, and the field of view offered by convolution in past studies is limited, which restricts the extraction of brain features. In addition, a convolution filter can only learn existing similar patterns and lacks a large receptive field to capture sufficiently high-level semantics, neglecting the importance of the field of view for feature extraction. Although a 3D CNN (three-dimensional convolutional neural network) with dilated (hole) convolution expands the field of view to some extent, it causes a gridding effect that makes the convolution kernel discontinuous; this actually increases the convolution workload without effectively expanding the field of view, resulting in loss of output features.
Because the field of view of previous networks is limited, the entire area of a feature cannot be captured. In contrast, group self-calibration convolution operates at multiple levels and encodes long-range context information without adding extra parameters or complexity, so that the model better captures the entire area of a feature.
The common forms of attention in previous studies are channel attention and spatial attention. Research has shown that channel attention significantly improves model performance, but channel attention in previous work cannot attend well to position information, while spatial attention ignores information in the channel domain and wrongly treats the picture features in each channel equally. For example, CBAM (Convolutional Block Attention Module) infers attention in turn along the two independent dimensions of channel and space, but it only takes the maximum and average values across channels at each location as weighting coefficients, which considers only local-range information. Although a parallel attention-enhancement block uses both channel and spatial attention, the two are simply spliced in parallel without refining the channel attention. Yet the convolution operation extracts informative features by mixing cross-channel and spatial information. Position information plays an important role in generating the spatial feature map, is key to capturing image structure information, and plays a decisive role in the model's decisions. However, since channel attention ignores spatial position information and spatial attention ignores channel information, neither alone can fully characterize the features. Hierarchically refining channel attention and then combining it with spatial attention makes feature extraction more sensitive. Therefore, coordinated attention is proposed to compress global spatial information into channel descriptors, encode channel relations and long-term dependencies with precise location information, and remain sensitive to changes in direction and location while capturing cross-channel information.
This helps the network locate the region of interest more accurately, enhances its expressive capability, and has a marked effect on optimizing the performance of CNNs (convolutional neural networks).
Most other multi-modal approaches combine PET (positron emission tomography) and MRI (magnetic resonance imaging), which increases cost.
The invention adopts a group self-correction scheme on top of a ResNet (residual network) architecture, overcoming the limited field of view of previous work by correcting the original spatial image through multiple groups of self-calibration down-sampling operations. The group self-correction convolution framework divides the input data into several groups for parallel processing. The convolution filters of a given layer in each group are separated into several non-uniform portions, and the filters in each portion are used in a heterogeneous manner. Specifically, self-calibration convolution does not convolve the input uniformly in the original space; it first converts the input into a low-dimensional embedding by down-sampling.
The essence of the attention mechanism is a set of attention-distribution coefficients, i.e., weighting parameters, that enhance or select important information about the target while suppressing irrelevant details. Because the convolution operation extracts information by integrating channel and spatial information, the invention extracts features through the coordinated attention mechanism, performing channel and spatial attention simultaneously, and converts the three-dimensional image from three directions into one-dimensional feature encodings. This encodes context information over long ranges, breaking the convention that convolution operates only in a small region, and is significant for explicitly integrating richer feature information.
Through self-correction, the invention effectively expands the field of view, extracts effective information over the global scope, and performs global identification without adding redundant information.
The present invention is not intended for disease diagnosis, but the output of the model can be used in medical research, for example in the assessment of AD (Alzheimer's disease), contributing to early discovery of the disease.
Alzheimer's disease (AD), an irreversible degenerative disease of the central nervous system, accounts for approximately 60-80% of all dementia cases. Because the onset of AD is insidious, the condition is often already serious when discovered; the patient's brain may have begun to deteriorate as much as 20 years before symptoms appear. Since no drug can yet completely cure AD, effective diagnosis at an early stage of the disease is all the more important.
Recent studies have demonstrated the effectiveness of convolutional neural networks (CNNs) for AD diagnosis. Shangran Qiu et al. [Shangran Qiu, Prajakta S. Joshi, Matthew I. Miller, Chonghua Xue, Xiao Zhou, Cody Karjadi, Gary H. Chang, Anant S. Joshi, Brigid Dwyer, Shuhan Zhu, Michelle Kaku, Yan Zhou, Yazan J. Alderazi, Arun Swaminathan, Sachin Kedar, Marie-Helene Saint-Hilaire, Sanford H. Auerbach, Jing Yuan, E. Alton Sartor, Rhoda Au, and Vijaya B. Kolachalama, Development and validation of an interpretable deep learning framework for Alzheimer's disease classification (2020)] applied deep learning to magnetic resonance imaging (MRI) for Alzheimer's disease classification. Evangeline Yee et al. [Evangeline Yee, Karteek Popuri, Lei Wang, Mirza Faisal Beg, for the Alzheimer's Disease Neuroimaging Initiative and the Australian Imaging, Biomarkers and Lifestyle Study, Construction of MRI-Based Alzheimer's Disease Score Based on Efficient 3D Convolutional Neural Network (2021)] proposed a three-dimensional convolutional neural network for AD classification studies.
In addition, an attention mechanism in the network can better focus on different brain areas and learn to distinguish differences in the brain texture structure of different people. Attention mechanisms have proven useful in various classification tasks, and much research has been directed at improving such networks. Shui-Hua Wang et al. [Shui-Hua Wang, Qinghua Zhou, Ming Yang, and Yu-Dong Zhang, ADVIAN (2021)] proposed a VGG (Visual Geometry Group)-inspired attention network with an integrated convolutional block attention module (CBAM) to diagnose Alzheimer's disease. Hao Guan [Donghua Lu, Gavin Weiguang Ding, Rakesh Balachandar, Mirza Faisal Beg, for the Alzheimer's Disease Neuroimaging Initiative, Multiscale Deep Neural Network based analysis of FDG-PET images for the early diagnosis of Alzheimer's disease, Medical Image Analysis (2018)] designed parallel attention-enhancement block models so that each location receives specific global information as a complement. SENet (Squeeze-and-Excitation Networks) [Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu, Squeeze-and-Excitation Networks, IEEE Conference on Computer Vision and Pattern Recognition (2019)] models the intrinsic association between channels by simply compressing the features.
The features output by the model constructed by the method are further analyzed, so that the method has high clinical use value.
The data obtained with the model produced by the construction method of the invention are analyzed. As shown in fig. 6, the performance of the invention is compared with other studies reported in the literature that train on the ADNI database and test on non-overlapping ADNI data sets, for the classification tasks AD vs. LMCI (late mild cognitive impairment) vs. EMCI (early mild cognitive impairment) vs. NC (normal control, normal cognition) and EMCI vs. LMCI.
Here ACC = (TP + TN)/(TP + TN + FP + FN), SEN = TP/(TP + FN) and SPE = TN/(TN + FP), where TP denotes true positives, TN true negatives, FP false positives and FN false negatives, and ACC, SEN and SPE denote accuracy, sensitivity and specificity, respectively.
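These three metrics follow directly from the confusion-matrix counts. A minimal sketch, using hypothetical count values purely to illustrate the formulas:

```python
def metrics(tp, tn, fp, fn):
    """Compute accuracy, sensitivity and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # ACC: fraction of all cases classified correctly
    sen = tp / (tp + fn)                   # SEN: fraction of positive cases detected
    spe = tn / (tn + fp)                   # SPE: fraction of negative cases detected
    return acc, sen, spe

# Hypothetical example: 45 true positives, 40 true negatives,
# 5 false positives, 10 false negatives out of 100 subjects.
acc, sen, spe = metrics(tp=45, tn=40, fp=5, fn=10)  # acc = 0.85
```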
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be considered to fall within the protection scope of the present invention.

Claims (4)

1. The model construction method for brain magnetic resonance image processing based on deep learning is characterized by comprising the following steps:
acquiring a basic data set comprising structural magnetic resonance images in respect of the brain and functional magnetic resonance images in respect of the brain;
extracting data characteristics of a data set to obtain a data set for training, wherein the data set for training comprises texture characteristics of a structural magnetic resonance image, brain network characteristics of a functional magnetic resonance image and network attribute characteristics of the functional magnetic resonance image;
constructing a framework for adding a coordinated attention mechanism;
acquiring a target algorithm of the frame, wherein the target algorithm can self-correct data;
training the frame by adopting the data set for training to obtain a group self-correction coordination attention network model;
the framework comprises a first convolution layer, a first batch normalization layer, a first activation layer, a maximum pooling layer, a first residual error unit, a second residual error unit, a third residual error unit, a fourth residual error unit, a second activation layer, an adaptive pooling layer, a full connection layer and a normalized exponential function (softmax) layer;
the first residual error unit, the second residual error unit, the third residual error unit and the fourth residual error unit all comprise a plurality of residual error blocks, and the residual error blocks comprise a second convolutional layer, a group self-correction convolutional layer and a third convolutional layer;
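As a rough illustration only, the layer sequence above can be sketched in PyTorch. The channel widths, kernel sizes, block counts, and the plain grouped convolution standing in for the group self-correction convolution layer are all assumptions, not the patented configuration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of the residual block: second convolution, group self-correction
    convolution (stubbed here as a grouped 3x3x3 convolution), third convolution,
    plus an identity skip connection. Widths and kernels are assumptions."""
    def __init__(self, ch, groups=2):
        super().__init__()
        self.conv2 = nn.Conv3d(ch, ch, 1)                           # second convolution layer
        self.sc = nn.Conv3d(ch, ch, 3, padding=1, groups=groups)    # placeholder for group self-correction conv
        self.conv3 = nn.Conv3d(ch, ch, 1)                           # third convolution layer
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.act(self.conv2(x))
        y = self.act(self.sc(y))
        return self.act(self.conv3(y) + x)                          # residual connection

def build_framework(in_ch=1, width=32, num_classes=2, blocks_per_unit=2):
    layers = [
        nn.Conv3d(in_ch, width, 7, stride=2, padding=3),  # first convolution layer
        nn.BatchNorm3d(width),                            # first batch normalization layer
        nn.ReLU(),                                        # first activation layer
        nn.MaxPool3d(3, stride=2, padding=1),             # maximum pooling layer
    ]
    for _ in range(4):                                    # four residual units
        layers += [ResidualBlock(width) for _ in range(blocks_per_unit)]
    layers += [
        nn.ReLU(),                                        # second activation layer
        nn.AdaptiveAvgPool3d(1),                          # adaptive pooling layer
        nn.Flatten(),
        nn.Linear(width, num_classes),                    # full connection layer
        nn.Softmax(dim=1),                                # normalized exponential (softmax) layer
    ]
    return nn.Sequential(*layers)
```

A forward pass on a 1-channel 32×32×32 volume yields a probability vector over the classes.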
the target algorithm comprises the following steps:
dividing data X input to the group self-correcting convolution layer into two parts, X₁ and X₂; the group self-correcting convolution layer comprises four heterogeneous filters K₁, K₂, K₃ and K₄;
in data X₁, the original spatial information is retained and a direct feature mapping is performed, i.e. Y₁ = F₁(X₁), F₁(X₁) = X₁ * K₁, wherein F₁ represents the first convolution operation; in data X₂, the filters K₂, K₃ and K₄ are used to down-sample X₂ to obtain Y₂; Y₁ is concatenated with Y₂ to obtain Y;
and coordinated attention is applied to Y to achieve long-range context information interaction, obtaining feature-extracted data Y′.
2. The method of model construction for brain magnetic resonance image processing based on deep learning of claim 1, wherein the base dataset further comprises a clinical scale and the dataset for training comprises quantified values of the scale.
3. The model construction method for brain magnetic resonance image processing based on deep learning of claim 1, characterized in that the specific process of using the filters K₂, K₃ and K₄ to down-sample X₂ to obtain Y₂ comprises the following steps:
X₂ is subjected to a three-dimensional spatial average pooling operation:

M₂ = AvgPool3d(X₂)  (1)

wherein AvgPool3d(·) represents three-dimensional average pooling, and M₂ is the data generated after average pooling of X₂;

based on the K₂ filter, feature mapping is performed on the average-pooled feature M₂:

X′₂ = F₂(M₂)  (2)

wherein F₂ denotes the second convolution operation, and X′₂ represents the result of performing the second convolution operation on M₂;

X₂ and X′₂ are summed and normalized with a Sigmoid activation function to obtain X″₂:

X″₂ = S(X₂ + X′₂)  (3)

Y′₂ = F₃(X₂) · X″₂  (4)

Y₂ = F₄(Y′₂)  (5)

wherein F₃ denotes the third convolution operation, F₄ denotes the fourth convolution operation, Y₂ represents the result of performing the fourth convolution operation on Y′₂, and S(·) represents the Sigmoid activation function.
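Equations (1)–(5) can be sketched as a PyTorch module. The kernel sizes, the pooling ratio r, and the nearest-neighbour up-sampling used to make X₂ and X′₂ shape-compatible before the addition in equation (3) are assumptions not stated in the claim:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv3d(nn.Module):
    """Sketch of the group self-correction convolution layer (equations (1)-(5)).
    Kernel sizes, pooling ratio r and up-sampling mode are illustrative assumptions."""
    def __init__(self, channels, r=4):
        super().__init__()
        half = channels // 2
        self.k1 = nn.Conv3d(half, half, 3, padding=1)  # F1: direct mapping on X1
        self.k2 = nn.Conv3d(half, half, 3, padding=1)  # F2: on the average-pooled M2
        self.k3 = nn.Conv3d(half, half, 3, padding=1)  # F3
        self.k4 = nn.Conv3d(half, half, 3, padding=1)  # F4
        self.r = r

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)                     # split X into X1 and X2
        y1 = self.k1(x1)                                       # Y1 = F1(X1) = X1 * K1
        m2 = F.avg_pool3d(x2, self.r)                          # (1) M2 = AvgPool3d(X2)
        x2p = F.interpolate(self.k2(m2), size=x2.shape[2:])    # (2) X'2 = F2(M2), upsampled to X2's size
        x2pp = torch.sigmoid(x2 + x2p)                         # (3) X''2 = S(X2 + X'2)
        y2p = self.k3(x2) * x2pp                               # (4) Y'2 = F3(X2) · X''2
        y2 = self.k4(y2p)                                      # (5) Y2 = F4(Y'2)
        return torch.cat([y1, y2], dim=1)                      # Y = concat(Y1, Y2)
```

The calibration branch thus modulates F₃(X₂) with an attention map derived from a down-sampled view of X₂.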
4. The model construction method for brain magnetic resonance image processing based on deep learning of claim 1, wherein the specific process of applying coordinated attention to Y to achieve long-range context information interaction and obtain the feature-extracted data Y′ is as follows:
performing a dimension reduction transformation on the three-dimensional image Y, converting the three-dimensional features from three directions into a one-dimensional feature coding operation by a global pooling method:

z_c = 1/(H×W×L) · Σ_{i=1..H} Σ_{j=1..W} Σ_{l=1..L} y_c(i, j, l)  (6)

wherein H, W and L are respectively the height, width and length of the three-dimensional image, i denotes any height, j denotes any width, l denotes any length, y_c denotes the c-th channel of the input feature Y subjected to the dimension reduction operation, and z_c represents the output of the c-th channel;
feature transformation is performed in different dimensions by three different pooling kernels (H, 1, 1), (1, W, 1) and (1, 1, L):

H′ = AdaptiveAvgPool3d(H, 1, 1)
W′ = AdaptiveAvgPool3d(1, W, 1)
L′ = AdaptiveAvgPool3d(1, 1, L)  (7)

wherein AdaptiveAvgPool3d(·) represents a three-dimensional adaptive average pooling operation, H′ represents the pooled height feature, W′ represents the pooled width feature, and L′ represents the pooled length feature;
the pooled three-dimensional features H′, W′ and L′ generated by the above operations are merged by a concatenation (concat) operation: M = cat(H′, W′, L′), and the merged data M is convolved: M′ = Conv3d(M); batch normalization is then applied to M′ to avoid a shift in the data distribution, followed by nonlinear activation; after the nonlinear activation the features are separated, split according to the three dimensions of the image and convolved respectively; and after a Sigmoid activation function, a re-weighting operation is applied to the whole feature to obtain Y′;

wherein cat(·) represents the merge-concatenation operation, and the Conv3d(·) function represents the computation of a three-dimensional convolution of a given input.
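A minimal sketch of the three-direction coordinate attention described in this claim, assuming a 1×1×1 mixing convolution and a channel-reduction ratio that the claim does not specify:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateAttention3d(nn.Module):
    """Sketch of 3D coordinate attention: pool along each spatial direction,
    merge, convolve + batch-normalize + activate, split back per direction,
    convolve per direction, Sigmoid-gate, and re-weight the input.
    The reduction ratio and intermediate width are illustrative assumptions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 1)
        self.conv1 = nn.Conv3d(channels, mid, 1)   # convolution on the merged descriptor M
        self.bn = nn.BatchNorm3d(mid)              # batch normalization against distribution shift
        self.act = nn.ReLU()                       # nonlinear activation
        self.conv_h = nn.Conv3d(mid, channels, 1)  # per-direction convolutions
        self.conv_w = nn.Conv3d(mid, channels, 1)
        self.conv_l = nn.Conv3d(mid, channels, 1)

    def forward(self, y):
        n, c, H, W, L = y.shape
        # (7) adaptive average pooling along each spatial direction
        h = F.adaptive_avg_pool3d(y, (H, 1, 1))                      # H'
        w = F.adaptive_avg_pool3d(y, (1, W, 1))                      # W'
        l = F.adaptive_avg_pool3d(y, (1, 1, L))                      # L'
        # merge (concat) along a single axis so one convolution mixes all directions
        m = torch.cat([h,
                       w.permute(0, 1, 3, 2, 4),
                       l.permute(0, 1, 4, 3, 2)], dim=2)             # M = cat(H', W', L')
        m = self.act(self.bn(self.conv1(m)))                          # M' = Conv3d(M), then BN + activation
        mh, mw, ml = torch.split(m, [H, W, L], dim=2)                 # separate per direction
        ah = torch.sigmoid(self.conv_h(mh))                           # gates in (0, 1)
        aw = torch.sigmoid(self.conv_w(mw)).view(n, c, 1, W, 1)
        al = torch.sigmoid(self.conv_l(ml)).view(n, c, 1, 1, L)
        return y * ah * aw * al                                       # re-weight Y to obtain Y'
```

Because each gate lies in (0, 1), the output Y′ is an element-wise attenuation of Y driven by long-range statistics along all three axes.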
CN202211365286.4A 2022-11-03 2022-11-03 Model construction method for brain magnetic resonance image processing based on deep learning Active CN115409743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211365286.4A CN115409743B (en) 2022-11-03 2022-11-03 Model construction method for brain magnetic resonance image processing based on deep learning


Publications (2)

Publication Number Publication Date
CN115409743A CN115409743A (en) 2022-11-29
CN115409743B true CN115409743B (en) 2023-03-24


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129235B (en) * 2023-04-14 2023-06-23 英瑞云医疗科技(烟台)有限公司 Cross-modal synthesis method for medical images from cerebral infarction CT to MRI conventional sequence

Citations (3)

Publication number Priority date Publication date Assignee Title
CN111882503A (en) * 2020-08-04 2020-11-03 深圳高性能医疗器械国家研究院有限公司 Image noise reduction method and application thereof
CN112329871A (en) * 2020-11-11 2021-02-05 河北工业大学 Pulmonary nodule detection method based on self-correction convolution and channel attention mechanism
CN114724155A (en) * 2022-04-19 2022-07-08 湖北工业大学 Scene text detection method, system and equipment based on deep convolutional neural network

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN112837274B (en) * 2021-01-13 2023-07-07 南京工业大学 Classification recognition method based on multi-mode multi-site data fusion
CN114757911B (en) * 2022-04-14 2023-04-07 电子科技大学 Magnetic resonance image auxiliary processing system based on graph neural network and contrast learning
CN115049629A (en) * 2022-06-27 2022-09-13 太原理工大学 Multi-mode brain hypergraph attention network classification method based on line graph expansion




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant