CN115311135A - 3D CNN-based isotropic MRI resolution reconstruction method - Google Patents

3D CNN-based isotropic MRI resolution reconstruction method

Info

Publication number
CN115311135A
Authority
CN
China
Prior art keywords
reconstruction
image
dcnn
model
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210729637.9A
Other languages
Chinese (zh)
Inventor
赵小乐
张晓博
李天瑞
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210729637.9A priority Critical patent/CN115311135A/en
Publication of CN115311135A publication Critical patent/CN115311135A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10088 - Magnetic resonance imaging [MRI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a 3D CNN-based isotropic MRI resolution reconstruction method, which comprises three parts: anisotropic scan fusion based on 3D interpolation, feature extraction and image reconstruction based on a 3D convolutional neural network, and 3D feature fusion based on adaptive channel concatenation. The invention reconstructs isotropic resolution for MR images with a 3D CNN while paying attention to compatibility with actual deployment: three orthogonal MR scans are fused through linear interpolation, and inference is then performed with the 3D CNN. To obtain high-resolution enhanced images, wide activation and weighted channel concatenation techniques are also employed. Extensive experiments show that the method achieves excellent isotropic resolution reconstruction. Owing to the simple orthogonal scan fusion strategy, the method can perform fast inference and reconstruction while preserving reconstruction accuracy, which is more favorable for actual deployment and application.

Description

3D CNN-based isotropic MRI resolution reconstruction method
Technical Field
The invention relates to the technical field of MRI super-resolution reconstruction, in particular to an isotropic MRI resolution reconstruction method based on a 3D CNN.
Background
Magnetic Resonance Imaging (MRI) is a very important and versatile imaging modality in clinical diagnosis and image-guided therapy. In many MRI experiments, a fundamental consideration is how to balance image spatial resolution, signal-to-noise ratio, and imaging time. To improve the signal-to-noise ratio and shorten imaging time, many MRI scanners acquire a smaller number of slices at a larger slice thickness. As a result, most 3D Magnetic Resonance (MR) images have a higher in-plane resolution and a lower resolution in the cross-plane direction, i.e., anisotropic resolution. This is disadvantageous for subsequent computer-aided diagnosis and other quantitative analyses, because more high-frequency information is lost in the cross-plane direction. A straightforward way to acquire isotropic-resolution 3D MR images is through hardware, such as higher magnetic field strength, faster gradients, and more sensitive radio-frequency coils. However, these solutions tend to be costly: they require hardware upgrades and remain constrained by physical laws and system noise.
Another way to achieve isotropic resolution reconstruction is image post-processing. A typical class of post-processing methods that can achieve this goal is Super-Resolution (SR), which aims to reconstruct a High-Resolution (HR) image of the same scene from one or more Low-Resolution (LR) images. As a classic low-level computer vision problem, image super-resolution reconstruction is an active and challenging research topic in natural and medical image processing due to its severely ill-posed nature and high practical value. Early methods mainly included interpolation-based, reconstruction-based, and traditional machine-learning-based methods. Since these methods use only limited additional information to solve the ill-posed inverse problem, and the models themselves have limited expressive power, their reconstruction performance is often very limited.
Recently, with the continuous development of Deep Learning, and especially the prevalence of Convolutional Neural Networks (CNNs) in computer vision, CNN-based image SR methods have emerged in large numbers and have repeatedly set new performance records in the SR field. For natural images, a pioneering work is SRCNN, an end-to-end CNN model with only three layers that integrates feature extraction, model inference, and image reconstruction into a single framework so that the three stages work synergistically. Most subsequent work builds on SRCNN along two lines, improving model performance and reducing model parameters, e.g., VDSR, EDSR, and MemNet. In the field of medical images, several related works have been proposed, including CSN, mDCSRN, and FSCWRN, among others. The work in the literature closest to the present invention focuses on super-resolution reconstruction in the general sense with a 3D CNN, rather than on the isotropic resolution reconstruction problem addressed by the present invention.
In the prior art, these problems have also been addressed with traditional machine learning methods and deep learning methods. Traditional machine learning methods usually rely on iterative optimization at test time and are computationally inefficient. Moreover, traditional techniques such as sparse representation and example-based learning are essentially shallow learning, and the expressive power of their models is very limited, so the performance of these algorithms is often unsatisfactory. In addition, traditional machine learning methods combine complex sub-processes such as dictionary learning, wavelet fusion, and self-similarity learning, which is unfavorable for practical application and deployment. Although deep learning methods can alleviate these shortcomings to some extent, their network structures are built from simple convolutions and residual connections, so the expressive power of the models is weak and the reconstruction accuracy is poor.
Disclosure of Invention
Aiming at the isotropic resolution reconstruction problem of MRI, the invention adopts a 3D convolutional neural network to achieve efficient and accurate reconstruction. The method specifically comprises anisotropic scan fusion based on 3D interpolation, feature extraction and image reconstruction based on a 3D convolutional neural network, and 3D feature fusion based on adaptive channel concatenation. It effectively improves the trade-off between reconstruction quality and reconstruction efficiency, fully considers the execution efficiency and deployment potential of the model, and solves the problems mentioned in the background.
In order to achieve this purpose, the invention provides the following technical scheme: a 3D CNN-based isotropic MRI resolution reconstruction method, comprising the following steps:
step one, fusing the orthogonal MRI scans into an initial isotropic-resolution image x based on linear interpolation;
step two, constructing a 3D CNN network model;
step three, initializing the model parameters and training the model with an $L_1$ loss function, the model being saved after training is finished;
step four, feeding the initial image x into the trained 3D CNN model as the input of the 3D CNN network to perform SR inference, completing the isotropic MRI resolution reconstruction.
Further, step one specifically comprises: the three orthogonal MRI scans are upsampled to the HR image space by linear interpolation to obtain three upsampled input images of the same shape and size, and the fused initial isotropic-resolution image x is then obtained by element-level averaging, expressed as:

$$x = \frac{1}{3}\sum_{v=1}^{3} U_v(x_v) = \frac{1}{3}\sum_{v=1}^{3}\hat{x}_v,$$

where $x_v$ denotes the anisotropic-resolution input, $U_v$ denotes the upsampling (i.e., linear interpolation) operation in the v-th direction, and $\hat{x}_v = U_v(x_v)$ denotes the initial image after upsampling in the v-th direction.
Further, the 3D CNN network model comprises three parts: shallow feature extraction, nonlinear inference, and SR reconstruction.
Further, the shallow feature extraction is realized with a 3D convolutional layer, and the resulting shallow features are denoted $x_0$; the nonlinear inference consists of a group of stacked 3D channel concatenation modules (3D CB), the output of the i-th 3D CB module being denoted $x_i$ (i = 1, ..., n), where n is the number of 3D CB modules; the SR reconstruction is completed with two 3 × 3 × 3 convolutional layers.
Furthermore, in the nonlinear inference part, 3D CB modules are adopted to extract deep features. Each 3D CB module consists of an identity-mapping branch and a nonlinear-mapping branch, where the nonlinear-mapping branch comprises two 3 × 3 × 3 convolutional layers and a ReLU nonlinear activation layer, and each 3D CB module adopts an adaptive 3D weighted channel concatenation technique. Let the input of the i-th 3D CB module be $x_{i-1}$ and its output be $x_i$; the mapping of the 3D CB module is expressed as:

$$x_i = \mathcal{C}_i\big(\,\pi_i\, x_{i-1} \,\|\, \lambda_i\, \mathcal{F}_i(x_{i-1})\,\big),$$

where $\mathcal{C}_i$ denotes the 1 × 1 × 1 3D convolutional layer at the tail of the 3D CB module, $\mathcal{F}_i$ denotes the nonlinear-mapping branch of the 3D CB module, $\|$ denotes channel concatenation, and $\pi_i$ and $\lambda_i$ are the weight coefficients on the identity-mapping and nonlinear-mapping branches, respectively.
Further, in the two 3 × 3 × 3 convolutional layers of the nonlinear-mapping branch, a 3D wide activation strategy is adopted: the output channels of the first 3 × 3 × 3 convolutional layer are expanded by a factor of a, and the second convolutional layer restores the original number of channels.
Further, in step three, training the model with the $L_1$ loss function specifically comprises: given a data set D containing |D| training samples $\{(x^{(k)}, y^{(k)})\}_{k=1}^{|D|}$, the $L_1$ loss function is expressed as:

$$\mathcal{L}(\Theta) = \frac{1}{|D|}\sum_{k=1}^{|D|}\big\|\mathcal{H}(x^{(k)};\Theta) - y^{(k)}\big\|_1,$$

where $\mathcal{H}(\cdot\,;\Theta)$ denotes the mapping function of the whole model and $\|\cdot\|_1$ denotes the $L_1$ norm, i.e., the element-level absolute difference between matrices or vectors.
The invention has the beneficial effects that:
1) The invention provides a method that reconstructs isotropic resolution for MR images with a 3D CNN while paying attention to compatibility with actual deployment: three orthogonal MR scans are fused through linear interpolation, and inference is then performed with the 3D CNN. To obtain high-resolution enhanced images, wide activation and weighted channel concatenation techniques are also employed. Extensive experiments show that the method achieves excellent isotropic resolution reconstruction. Owing to the simple orthogonal scan fusion strategy, the method can perform fast inference and reconstruction while preserving reconstruction accuracy, which is more favorable for actual deployment and application.
2) Centered on isotropic resolution reconstruction of MR images, the main goals are simple implementation, efficient execution, and easy deployment. Anisotropic-resolution image fusion based on linear interpolation and a reconstruction model based on a 3D CNN network are designed, so that the reconstruction quality is improved while the algorithm remains simple and efficient, improving the trade-off between model running efficiency and reconstruction quality.
Drawings
FIG. 1 is a schematic diagram of fusing an initial isotropic resolution image using linear interpolation;
FIG. 2 is a diagram of the overall network architecture for 3D CNN-based isotropic resolution reconstruction in accordance with the present invention;
FIG. 3 is a schematic diagram of a wide activation strategy in an adaptive channel concatenation module;
FIG. 4 is a visual comparison of the reconstruction results of different models when performing SR × 7 reconstruction on Set7, from top to bottom: sagittal, axial, and coronal views;
FIG. 5 is a visualization of the residuals between the SR reconstruction results of each model and the Ground Truth, from top to bottom: sagittal, axial, and coronal views.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to FIGS. 1-5, the present invention mainly comprises two parts: orthogonal MR scan fusion based on linear interpolation and SR inference based on a 3D convolutional neural network. Specifically, a 3D CNN-based isotropic MRI resolution reconstruction method comprises the following steps:
step one, fusing the orthogonal MRI scans into an initial isotropic-resolution image x based on linear interpolation;
step two, constructing a 3D CNN network model;
step three, initializing the model parameters and training the model with an $L_1$ loss function, the model being saved after training is finished;
step four, feeding the initial image x into the trained 3D CNN model as the input of the 3D CNN network to perform SR inference, completing the isotropic MRI resolution reconstruction.
The SR inference based on the 3D convolutional neural network involves two key techniques, adaptive 3D weighted channel concatenation and 3D wide activation, which are explained separately below.
Orthogonal MR scan fusion based on linear interpolation
To simplify pre-processing, the three orthogonal MRI scans (axial, coronal, and sagittal) are upsampled into the HR image space using simple linear interpolation, yielding three upsampled input images of the same shape and size, as shown in FIG. 1. The input of the 3D CNN is then obtained by element-level averaging. Formally, the process can be described as:

$$x = \frac{1}{3}\sum_{v=1}^{3} U_v(x_v) = \frac{1}{3}\sum_{v=1}^{3}\hat{x}_v,$$

where $x_v$ denotes the anisotropic-resolution input, $U_v$ denotes the upsampling (i.e., linear interpolation) operation in the v-th direction, and $\hat{x}_v = U_v(x_v)$ denotes the initial image after upsampling in the v-th direction. Since the $\hat{x}_v$ are consistent in shape, element-level averaging can be performed according to the above formula, and x denotes the initial image obtained after fusion. Fusing multiple orthogonal MR scans into a single MR image in this way consists only of simple linear interpolation and element-level averaging, which is easy to implement in practical deployments.
SR inference based on 3D convolutional neural network
After the fused initial image x is obtained, it is fed into the 3D CNN model to perform SR inference. The structure of the whole inference network is shown in FIG. 2. Like many other CNN-based image SR models, the 3D CNN network of the present invention comprises three parts: shallow feature extraction, nonlinear inference, and SR reconstruction. The shallow feature extraction is realized with a 3D convolutional layer (the resulting shallow features are denoted $x_0$); the nonlinear inference consists of a set of stacked 3D Channel Concatenation blocks (3D CB), the output of the i-th 3D CB module being denoted $x_i$ (i = 1, ..., n), where n is the number of 3D CB modules; and the SR reconstruction is completed by two 3 × 3 × 3 convolutional layers, as shown in FIG. 2. A minimal code sketch of this overall structure is given below.
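The PyTorch sketch below illustrates the three-part structure under stated assumptions: the class name IsoSRNet, the single-channel input, and the absence of extra skip connections are illustrative, and ChannelConcatBlock3D refers to the 3D CB module sketched later in the adaptive weighted channel concatenation subsection.

```python
import torch.nn as nn

class IsoSRNet(nn.Module):
    """Shallow feature extraction -> stacked 3D CB modules -> SR reconstruction."""
    def __init__(self, channels=32, n_blocks=16, expand=4):
        super().__init__()
        # Shallow feature extraction: a single 3D convolutional layer (outputs x_0)
        self.head = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        # Nonlinear inference: n stacked 3D channel concatenation (3D CB) modules
        self.body = nn.Sequential(
            *[ChannelConcatBlock3D(channels, expand) for _ in range(n_blocks)]
        )
        # SR reconstruction: two 3x3x3 convolutional layers back to one channel
        self.tail = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        x0 = self.head(x)     # shallow features
        xn = self.body(x0)    # deep features from the 3D CB stack
        return self.tail(xn)  # reconstructed isotropic-resolution volume
```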
The network structure of the present invention is shown in FIG. 2, and the expansion factor of wide activation in our experiments is a = 4. In addition, the number of 3D CB modules in FIG. 2 is set to n = 16, and the number of channels of the intermediate feature maps is C = 32. The learnable weight parameters $\pi_i$ and $\lambda_i$ are all initialized to 1.0 and then adjusted automatically during training. The model is optimized with the Adam algorithm, whose parameters are set to $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. The learning rate is initialized to $2 \times 10^{-4}$ and halved every 100,000 iterations, for a total of 400,000 iterations. In the training stage, 24 × 24 × 24 LR image blocks are extracted from the initial image x fused by linear interpolation, and the corresponding HR image blocks are sampled from the HR image at the same locations, the LR and HR blocks being kept consistent in size. In addition, the training data are augmented by flipping the 3D image blocks along the three dimensions (front-back, left-right, and top-bottom), as sketched below.
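A sketch of this patch sampling and flip-based augmentation, under the assumption that the fused volume x and the HR volume live on the same grid, is shown below; the helper name sample_training_pair is illustrative only.

```python
import numpy as np

def sample_training_pair(x_fused, x_hr, patch=24, rng=np.random):
    """Cut a 24x24x24 block from the fused initial volume and the HR volume at
    the same location, then apply random flips along the three axes."""
    d, h, w = x_fused.shape
    i = rng.randint(0, d - patch + 1)
    j = rng.randint(0, h - patch + 1)
    k = rng.randint(0, w - patch + 1)
    lr_block = x_fused[i:i + patch, j:j + patch, k:k + patch].copy()
    hr_block = x_hr[i:i + patch, j:j + patch, k:k + patch].copy()
    # Augmentation: random front-back, left-right, and top-bottom flips
    for axis in range(3):
        if rng.rand() < 0.5:
            lr_block = np.flip(lr_block, axis=axis).copy()
            hr_block = np.flip(hr_block, axis=axis).copy()
    return lr_block, hr_block
```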
From the perspective of image reconstruction, related studies indicate that the $L_1$ loss promotes model convergence better than the conventional $L_2$ loss, so the $L_1$ loss function is also used to train the model here. Given a data set D containing |D| training samples $\{(x^{(k)}, y^{(k)})\}_{k=1}^{|D|}$, the $L_1$ loss function is typically defined as:

$$\mathcal{L}(\Theta) = \frac{1}{|D|}\sum_{k=1}^{|D|}\big\|\mathcal{H}(x^{(k)};\Theta) - y^{(k)}\big\|_1,$$

where $\mathcal{H}(\cdot\,;\Theta)$ denotes the mapping function corresponding to the whole model and $\|\cdot\|_1$ denotes the $L_1$ norm, i.e., the element-level absolute difference between matrices or vectors.
Adaptive 3D weighted channel concatenation
In the nonlinear inference stage, 3D CB modules are designed to extract deep features, and an adaptive 3D weighted channel concatenation technique is adopted in each 3D CB module. Each 3D CB module consists of an identity-mapping branch and a nonlinear-mapping branch, where the nonlinear-mapping branch comprises two convolutional layers and one nonlinear activation layer. Let the input of the i-th 3D CB module be $x_{i-1}$ and its output be $x_i$; the mapping (channel weighting) of the 3D CB module can be expressed as:

$$x_i = \mathcal{C}_i\big(\,\pi_i\, x_{i-1} \,\|\, \lambda_i\, \mathcal{F}_i(x_{i-1})\,\big),$$

where $\mathcal{C}_i$ denotes the 1 × 1 × 1 3D convolutional layer at the tail of the 3D CB module, as shown in FIG. 2, $\mathcal{F}_i$ denotes the nonlinear-mapping branch of the 3D CB module, $\|$ denotes channel concatenation, and $\pi_i$ and $\lambda_i$ are the weight coefficients on the identity-mapping and nonlinear-mapping branches, respectively. Since $\pi_i$ and $\lambda_i$ are learnable (i.e., adaptive) parameters, the expressive power of the whole model is clearly improved.
3D wide activation strategy
As shown in FIG. 2, the SR inference network of the present invention mainly comprises a set of stacked 3D CB modules, and each 3D CB module contains two branches: an identity-mapping branch and a nonlinear-mapping branch. In the nonlinear-mapping branch, two 3 × 3 × 3 convolutional layers sandwich a ReLU activation layer. Since isotropic resolution reconstruction of MR images is a typical low-level computer vision problem, increasing the number of channels at the output of the first convolutional layer facilitates the forward transmission of low-level visual features. To this end, a 3D version of the wide activation strategy is used to expand the output channels of the first 3 × 3 × 3 convolutional layer by a factor of a, with the second convolutional layer restoring the original number of channels; the whole process is shown in FIG. 3 and sketched in code below.
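The following PyTorch sketch combines the adaptive 3D weighted channel concatenation of the previous subsection with the 3D wide activation described here; the class name ChannelConcatBlock3D and the use of scalar branch weights are illustrative assumptions consistent with the formulas above, not the exact implementation.

```python
import torch
import torch.nn as nn

class ChannelConcatBlock3D(nn.Module):
    """One 3D CB module: weighted identity and nonlinear branches are
    concatenated along the channel axis and fused by a 1x1x1 convolution."""
    def __init__(self, channels=32, expand=4):
        super().__init__()
        # Nonlinear mapping branch with 3D wide activation: the first 3x3x3
        # convolution expands the channels by `expand` (a = 4), the second
        # restores the original channel count.
        self.branch = nn.Sequential(
            nn.Conv3d(channels, channels * expand, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels * expand, channels, kernel_size=3, padding=1),
        )
        # Learnable branch weights pi_i and lambda_i, initialized to 1.0
        self.pi = nn.Parameter(torch.ones(1))
        self.lam = nn.Parameter(torch.ones(1))
        # 1x1x1 3D convolution at the tail of the module (C_i in the formula)
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        identity = self.pi * x                 # pi_i * x_{i-1}
        nonlinear = self.lam * self.branch(x)  # lambda_i * F_i(x_{i-1})
        return self.fuse(torch.cat([identity, nonlinear], dim=1))
```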
Centered on isotropic resolution reconstruction of MR images, the main goals are simple implementation, efficient execution, and easy deployment. Anisotropic-resolution image fusion based on linear interpolation and a reconstruction model based on a 3D CNN network are designed, improving the reconstruction quality of the model and the compromise between running efficiency and reconstruction quality while keeping the algorithm simple and efficient.
Example 2
To illustrate the technical effect of the present invention, 155 pairs of structural MR images, including both T1 and T2 images, were randomly selected from the HCP data set. The spatial resolution of these images is 0.7 mm × 0.7 mm × 0.7 mm and the image size is 260 × 311 × 260. They were divided into 100 training images, 50 test images, and 5 validation images. In addition, a real acquired data set, Set7, containing 7 T1 test images, was also collected for testing.
Table 1 shows the quantitative comparison between the method of the present invention (isotropic Super-Resolution Network, isoSRN) and three other methods under different slice thicknesses (scaling factors). CubeAvg denotes simple linear-interpolation fusion only, without the 3D CNN-based SR reconstruction; NLM denotes Non-Local Means, a traditional SR reconstruction method; and SRCNN3D is a 3D version of the SRCNN model. As can be seen from Table 1, the isoSRN model proposed by the present invention surpasses these three comparison methods by a large margin at all scales. For example, compared with CubeAvg, the PSNR of isoSRN for 2-fold isotropic resolution reconstruction is improved by 11.58 dB on T1 images and by 14.01 dB on T2 images. In addition, the PSNR/SSIM values of all algorithms show a gradually decreasing trend as the slice thickness increases. Even so, isoSRN achieves a performance gain of 7.50 dB / 0.0844 when performing SR × 7 reconstruction on T2 images. This shows that the preliminary result obtained by linear interpolation can be significantly improved with the 3D CNN. SRCNN3D also follows the basic pipeline of the present invention but uses the SRCNN model structure as its 3D CNN, so the results in Table 1 also show that the 3D CB module has better expressive power and that an inference network composed of 3D CB modules achieves higher reconstruction accuracy.
TABLE 1. Performance comparison of different models on the HCP test data set, including 6 scale factors from ×2 to ×7. The maximum in each comparison cell is shown in bold, and the quantitative evaluation metrics are PSNR (dB) and SSIM. [The table itself is provided as an image in the original publication.]
To compare more fully with other advanced methods and to verify the generalization ability of the proposed method on real data, two further comparison models are added: ReCNN and VDSR3D. The former is a deep learning model consisting of 10 convolutional layers; the latter is a 3D version of the VDSR model and, like SRCNN3D, follows the same pipeline as the present invention. The results are shown in Table 2.
TABLE 2. Performance comparison of different models on the Set7 data set, including 6 scale factors from ×2 to ×7. The maximum in each comparison cell is shown in bold, and the quantitative evaluation metrics are PSNR (dB) and SSIM. [The table itself is provided as an image in the original publication.]
As can be seen from Table 2, although Set7 was acquired with different hardware, subjects, imaging parameters, and imaging environments, our isoSRN model still performs best at all scales, with a clear performance margin. Note that the two traditional machine learning methods are not included here, mainly because they perform worse than ReCNN and because they provide no reference code and are difficult to reproduce.
FIG. 4 shows a visual comparison of these methods when a test image in Set7 undergoes SR × 7 reconstruction, from top to bottom: sagittal, axial, and coronal views. The performance advantage of the deep learning methods over the traditional methods can be clearly observed. In addition, among the four deep learning methods (SRCNN3D, ReCNN, VDSR3D, and isoSRN), the isoSRN model of the present invention performs best: the edges and contours in its reconstruction are clearer, and the texture details are closer to the Ground Truth. To show the performance advantages of the invention more fully, the reconstruction errors between the reconstruction results of these algorithms and the Ground Truth are visualized, from top to bottom: sagittal, axial, and coronal views, as shown in FIG. 5. It can be seen that the reconstruction result of isoSRN has the smallest reconstruction error, indicating that it is closest to the Ground Truth. Note that FIGS. 4 and 5 also contain a set of results named isoSRN+, an enhanced version of the method of the present invention using a Geometric Self-Ensemble strategy; both isoSRN itself and the enhanced isoSRN+ achieve the best SR results among the compared methods. This work was supported by the National Natural Science Foundation of China Young Scientists Fund (62102330), the Sichuan Natural Science Foundation Young Scientists Fund (2022NSFSC0947), and the Fundamental Research Funds for the Central Universities (2682021CX040).
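For reference, the geometric self-ensemble used to obtain isoSRN+ is commonly implemented by averaging the model outputs over geometrically transformed copies of the input; the sketch below uses axis flips only and is an assumption about the general technique, not the exact enhancement used in the experiments.

```python
import itertools
import torch

@torch.no_grad()
def geometric_self_ensemble(model, x):
    """x: input volume of shape (N, C, D, H, W); returns the averaged output
    over all eight flip combinations along the three spatial axes."""
    outputs = []
    for r in range(4):
        for axes in itertools.combinations((2, 3, 4), r):
            y = model(torch.flip(x, dims=axes)) if axes else model(x)
            outputs.append(torch.flip(y, dims=axes) if axes else y)
    return torch.stack(outputs, dim=0).mean(dim=0)
```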
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their features, without departing from the spirit and scope of the invention.

Claims (7)

1. A 3D CNN-based isotropic MRI resolution reconstruction method, characterized by comprising the following steps:
step one, fusing the orthogonal MRI scans into an initial isotropic-resolution image x based on linear interpolation;
step two, constructing a 3D CNN network model;
step three, initializing the model parameters and training the model with an $L_1$ loss function, the model being saved after training is finished;
step four, feeding the initial image x into the trained 3D CNN model as the input of the 3D CNN network to perform SR inference, completing the isotropic MRI resolution reconstruction.
2. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 1, wherein step one specifically comprises: the three orthogonal MRI scans are upsampled to the HR image space by linear interpolation to obtain three upsampled input images of the same shape and size, and the fused initial isotropic-resolution image x is then obtained by element-level averaging, expressed as:

$$x = \frac{1}{3}\sum_{v=1}^{3} U_v(x_v) = \frac{1}{3}\sum_{v=1}^{3}\hat{x}_v,$$

where $x_v$ denotes the anisotropic-resolution input, $U_v$ denotes the upsampling (i.e., linear interpolation) operation in the v-th direction, and $\hat{x}_v = U_v(x_v)$ denotes the initial image after upsampling in the v-th direction.
3. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 1, wherein the 3D CNN network model comprises three parts: shallow feature extraction, nonlinear inference, and SR reconstruction.
4. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 3, wherein the shallow feature extraction is realized with a 3D convolutional layer and the resulting shallow features are denoted $x_0$; the nonlinear inference consists of a group of stacked 3D channel concatenation modules (3D CB), the output of the i-th 3D CB module being denoted $x_i$ (i = 1, ..., n), where n is the number of 3D CB modules; and the SR reconstruction is completed with two 3 × 3 × 3 convolutional layers.
5. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 4, wherein in the nonlinear inference part, 3D CB modules are adopted to extract deep features; each 3D CB module consists of an identity-mapping branch and a nonlinear-mapping branch, the nonlinear-mapping branch comprising two 3 × 3 × 3 convolutional layers and a ReLU nonlinear activation layer, and each 3D CB module adopts an adaptive 3D weighted channel concatenation technique; letting the input of the i-th 3D CB module be $x_{i-1}$ and its output be $x_i$, the mapping of the 3D CB module is expressed as:

$$x_i = \mathcal{C}_i\big(\,\pi_i\, x_{i-1} \,\|\, \lambda_i\, \mathcal{F}_i(x_{i-1})\,\big),$$

where $\mathcal{C}_i$ denotes the 1 × 1 × 1 3D convolutional layer at the tail of the 3D CB module, $\mathcal{F}_i$ denotes the nonlinear-mapping branch of the 3D CB module, $\|$ denotes channel concatenation, and $\pi_i$ and $\lambda_i$ are the weight coefficients on the identity-mapping and nonlinear-mapping branches, respectively.
6. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 5, wherein in the two 3 × 3 × 3 convolutional layers of the nonlinear-mapping branch, a 3D wide activation strategy is adopted: the output channels of the first 3 × 3 × 3 convolutional layer are expanded by a factor of a, and the second convolutional layer restores the original number of channels.
7. The 3D CNN-based isotropic MRI resolution reconstruction method of claim 1, wherein in step three, training the model with the $L_1$ loss function specifically comprises: given a data set D containing |D| training samples $\{(x^{(k)}, y^{(k)})\}_{k=1}^{|D|}$, the $L_1$ loss function is expressed as:

$$\mathcal{L}(\Theta) = \frac{1}{|D|}\sum_{k=1}^{|D|}\big\|\mathcal{H}(x^{(k)};\Theta) - y^{(k)}\big\|_1,$$

where $\mathcal{H}(\cdot\,;\Theta)$ denotes the mapping function of the whole model and $\|\cdot\|_1$ denotes the $L_1$ norm, i.e., the element-level absolute difference between matrices or vectors.
CN202210729637.9A 2022-06-24 2022-06-24 3D CNN-based isotropic MRI resolution reconstruction method Pending CN115311135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210729637.9A CN115311135A (en) 2022-06-24 2022-06-24 3 DCNN-based isotropic MRI resolution reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210729637.9A CN115311135A (en) 2022-06-24 2022-06-24 3 DCNN-based isotropic MRI resolution reconstruction method

Publications (1)

Publication Number Publication Date
CN115311135A true CN115311135A (en) 2022-11-08

Family

ID=83855127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210729637.9A Pending CN115311135A (en) 2022-06-24 2022-06-24 3 DCNN-based isotropic MRI resolution reconstruction method

Country Status (1)

Country Link
CN (1) CN115311135A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974448A (en) * 2024-04-02 2024-05-03 中国科学院自动化研究所 Three-dimensional medical image isotropy super-resolution method and device


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114503159A (en) * 2019-08-14 2022-05-13 豪夫迈·罗氏有限公司 Three-dimensional object segmentation of medical images localized by object detection
CN111507462A (en) * 2020-04-15 2020-08-07 华中科技大学鄂州工业技术研究院 End-to-end three-dimensional medical image super-resolution reconstruction method and system
CN112837224A (en) * 2021-03-30 2021-05-25 哈尔滨理工大学 Super-resolution image reconstruction method based on convolutional neural network
CN113256772A (en) * 2021-05-10 2021-08-13 华中科技大学 Double-angle light field high-resolution reconstruction system and method based on visual angle conversion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOLE ZHAO: "Isotropic MRI Reconstruction with 3D Convolutional Neural Network", ISMRM 2020 *
赵小乐 (ZHAO, Xiaole): "Research on Image Super-Resolution Technology and Its Application in MRI", China Doctoral Dissertations Full-text Database (Information Science and Technology) *


Similar Documents

Publication Publication Date Title
Qiu et al. Multiple improved residual networks for medical image super-resolution
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN111932460B (en) MR image super-resolution reconstruction method, device, computer equipment and storage medium
CN111047515A (en) Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
Li et al. Deep learning methods in real-time image super-resolution: a survey
Du et al. Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN111899165A (en) Multi-task image reconstruction convolution network model based on functional module
Wei et al. Channel rearrangement multi-branch network for image super-resolution
CN115311135A (en) 3 DCNN-based isotropic MRI resolution reconstruction method
Fan et al. An interpretable MRI reconstruction network with two-grid-cycle correction and geometric prior distillation
Pham Deep learning for medical image super resolution and segmentation
CN116071239B (en) CT image super-resolution method and device based on mixed attention model
CN117333750A (en) Spatial registration and local global multi-scale multi-modal medical image fusion method
WO2024021796A1 (en) Image processing method and apparatus, electronic device, storage medium, and program product
Lepcha et al. An efficient medical image super resolution based on piecewise linear regression strategy using domain transform filtering
CN116681592A (en) Image super-resolution method based on multi-scale self-adaptive non-local attention network
CN116630178A (en) U-Net-based power frequency artifact suppression method for ultra-low field magnetic resonance image
CN116152060A (en) Double-feature fusion guided depth image super-resolution reconstruction method
CN110895790A (en) Scene image super-resolution method based on posterior degradation information estimation
CN110322528A (en) Nuclear magnetic resonance brain image reconstructing blood vessel method based on 3T, 7T
Qin et al. A2OURSR: Adaptive adjustment based real MRI super-resolution via opinion-unaware measurements
CN114140404A (en) Lung multi-core MRI (magnetic resonance imaging) double-domain super-resolution reconstruction method based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20221108)