CN113744132A - MR image depth network super-resolution method based on multiple optimization - Google Patents

MR image depth network super-resolution method based on multiple optimization

Info

Publication number
CN113744132A
Authority
CN
China
Prior art keywords
resolution
super
image
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111055372.0A
Other languages
Chinese (zh)
Inventor
刘环宇
李君宝
罗庆
邵明媚
杨�一
董博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202111055372.0A priority Critical patent/CN113744132A/en
Publication of CN113744132A publication Critical patent/CN113744132A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The multiple-optimization-based MR image depth network super-resolution method addresses the difficulty, in prior-art deep-learning approaches to super-resolution MR image reconstruction, of striking a good balance between model complexity and training difficulty; it belongs to the technical field of image super-resolution reconstruction. The invention comprises the following steps: acquiring an MR image training set containing a multi-slice low-resolution MR image set and the corresponding multi-slice high-resolution MR image set; training a super-resolution deep learning network with the MR image training set, where the network comprises a fusion layer and a plurality of super-resolution networks, each super-resolution network takes a low-resolution MR image as input, the outputs of all super-resolution networks are fed simultaneously into the fusion layer, and the fusion layer outputs a high-resolution MR image; and reconstructing low-resolution MR images into high-resolution MR images with the trained super-resolution deep learning network. During training, the super-resolution networks and the fusion layer can use a cascade of different loss functions.

Description

MR image depth network super-resolution method based on multiple optimization
Technical Field
The invention belongs to the technical field of image super-resolution reconstruction.
Background
MR images can reveal fine lesion tissue at an early stage and are therefore of great value for lesion localization and disease diagnosis; high-resolution MR images provide clearer structural detail and help physicians analyze and diagnose disease correctly. High-resolution MR images are therefore preferred in both clinical applications and scientific research. However, acquiring high-resolution MR images requires a stronger magnetic field and a longer scan time, so imaging is slow and patient motion during the scan introduces noise. Raising MR image resolution at the cost of a multiplied imaging time greatly degrades the patient experience and easily causes psychological stress. Compared with hardware approaches that improve MR image resolution by upgrading the imaging equipment, software-based resolution enhancement is economical and efficient. Dictionary learning methods improve MR image resolution and, compared with traditional interpolation methods such as bicubic and nearest-neighbour interpolation and with reconstruction-based methods such as projection onto convex sets and maximum a posteriori estimation, offer strong feature-expression capability and can model the mapping between low- and high-resolution MR images well; super-resolution performance has been improved both by optimizing dictionary-construction parameters and by optimizing training-sample quality. However, although dictionary learning can introduce image priors from a training data set, its shallow model has limited expressive power, which limits its ability to learn image priors.
In recent years, with the rapid development of deep learning, super-resolution methods based on deep learning have been actively explored, greatly advancing image super-resolution technology. In 2014, Dong et al. proposed the SRCNN model, applying deep learning to image super-resolution for the first time. Since then, researchers have studied network architectures, loss functions, and learning principles and strategies, and a large number of super-resolution methods based on deep convolutional networks have been proposed, continually refreshing the state of the art, e.g. VDSR, RDN, SRGAN, DBPN and SRFBN, which achieve leading performance on standard image super-resolution benchmarks. However, these deep learning networks mainly target natural-image super-resolution rather than medical or MR images. Medical-image researchers have therefore followed the development of deep super-resolution and transferred deep learning techniques to MR images, producing a series of deep-learning-based MR image super-resolution methods. Nevertheless, applying current deep learning to MR image super-resolution still has problems: the feature-representation capability of a deep network grows as the network deepens, but the training difficulty multiplies with the enormous network structure, so it is difficult to strike a good balance between model complexity and training difficulty.
Disclosure of Invention
To address the difficulty of striking a good balance between model complexity and training difficulty when reconstructing super-resolution MR images with existing deep learning networks, the invention provides a multiple-optimization-based MR image depth network super-resolution method that widens and deepens the network.
The MR image depth network super-resolution method of the invention comprises the following steps:
S1, acquiring an MR image training set, including a low-resolution MR image set and a corresponding high-resolution MR image set;
S2, training the super-resolution deep learning network by using the MR image training set, wherein a low-resolution MR image is used as input, and a high-resolution MR image is used as output;
the super-resolution deep learning network comprises a fusion layer and a plurality of super-resolution networks, wherein the input of each super-resolution network is a low-resolution MR image, the output of each super-resolution network is simultaneously input to the fusion layer, and the fusion layer outputs a high-resolution MR image;
and S3, reconstructing the low-resolution MR image into a high-resolution MR image by using the trained super-resolution deep learning network.
Preferably, the S2 includes:
respectively training each super-resolution network by using an MR image training set, optimizing parameters of each super-resolution network and finishing the training of each super-resolution network;
and taking the low-resolution MR image as input, respectively acquiring the super-resolution results of each super-resolution network after the training, taking the super-resolution results of each super-resolution network as the input of the fusion layer, training the fusion layer, optimizing the parameters of the fusion layer, and finishing the training of the fusion layer.
Preferably, in S2, the super-resolution network and the fusion layer may use a cascade of different loss functions during training.
Preferably, in S2, the training method for each super resolution network/fusion layer is:
reconstructing a low-resolution MR image from the MR image training set into a high-resolution MR image by using the super-resolution network/fusion layer, calculating the error between the reconstructed high-resolution MR image and the corresponding high-resolution MR image in the MR image training set, the loss function used in the error calculation being a cascade of the loss functions of the super-resolution networks, and updating the parameters of the super-resolution network/fusion layer according to the calculated error.
Preferably, when the loss functions of the respective super-resolution networks are cascaded, different weights may be assigned to the loss functions of the respective super-resolution networks.
Preferably, in S2, an Adam optimizer is used to optimize the calculated error, and the optimized parameters of the super-resolution network/fusion layer are obtained.
Preferably, the input of the super-resolution network and the fusion layer are both multi-slice low-resolution images of the MR images, and the output is both corresponding multi-slice high-resolution MR images.
Preferably, the super-resolution network is composed of one independent network or a cascade of a plurality of independent networks.
Preferably, the super-resolution deep learning network comprises 2 super-resolution networks, wherein one super-resolution network is an ESRFBN network model, and the other super-resolution network is an EDSR network model.
Preferably, the loss function used to train the super-resolution network is:
L_Mix = α · L_MS-SSIM + (1 - α) · G_σ · L_1
where L_MS-SSIM represents the multi-scale (multi-level) structural similarity loss function, L_1 represents the L1 loss function, G_σ is a Gaussian coefficient, and α = 0.84.
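For illustration, the following is a minimal PyTorch-style sketch of such a cascaded L1 + MS-SSIM loss. It assumes channel-stacked multi-slice tensors with values in [0, 1] and uses the third-party pytorch_msssim package for the MS-SSIM term; the names mix_loss, gaussian_window and ALPHA are illustrative, not taken from the patent.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party package: pip install pytorch-msssim

ALPHA = 0.84  # weighting between the MS-SSIM and L1 terms, as given in the text

def gaussian_window(channels, size=11, sigma=1.5, device="cpu"):
    """Separable 2D Gaussian window used to weight the L1 term (the G_sigma factor)."""
    coords = torch.arange(size, dtype=torch.float32, device=device) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)                      # shape (1, size)
    return (g.t() @ g).expand(channels, 1, size, size)  # depthwise conv kernel

def mix_loss(sr, hr):
    """L_Mix = alpha * (1 - MS-SSIM) + (1 - alpha) * Gaussian-weighted L1.

    sr, hr: (N, C, H, W) tensors in [0, 1]; MS-SSIM needs H, W larger than ~160 px.
    """
    l_msssim = 1.0 - ms_ssim(sr, hr, data_range=1.0, size_average=True)
    window = gaussian_window(sr.shape[1], device=sr.device)
    # Gaussian-smoothed absolute error approximates the G_sigma * L1 term
    l1 = F.conv2d((sr - hr).abs(), window, padding=5, groups=sr.shape[1]).mean()
    return ALPHA * l_msssim + (1.0 - ALPHA) * l1
```

With α = 0.84, most of the weight goes to the MS-SSIM term, which preserves contrast in high-frequency regions, while the Gaussian-weighted L1 term retains colour and brightness.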
The beneficial effect of the invention is that it provides a heterogeneous-network-fusion method: for several independent deep super-resolution networks, their outputs are integrated by an additional fusion layer. This essentially widens and deepens the network and effectively improves the capability of mapping and representing low-resolution features, thereby yielding a better MR image super-resolution result. In addition, the invention proposes a multi-slice input strategy that provides the network with more structural information. MR images are sequence images in which adjacent slices have similar structures, so they can supply 2.5D information to the network; this achieves better results than 2D input while requiring fewer parameters and less computation than 3D convolution.
Building on an analysis of currently used loss functions, the invention proposes a cascaded multi-loss-function optimization method: combining the colour- and brightness-preserving properties of the L1 loss with the MS-SSIM loss better preserves the contrast of high-frequency regions, giving the deep model better representation performance.
Drawings
FIG. 1 is a schematic diagram of the principle of the MR image depth network super-resolution method based on multiple optimization.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The MR image depth network super-resolution method of the present embodiment includes:
acquiring an MR image training set, wherein the MR image training set comprises a low-resolution MR image set and a corresponding high-resolution MR image set;
training the super-resolution deep learning network by using an MR image training set, wherein a low-resolution MR image is used as input, and a high-resolution MR image is used as output;
as shown in fig. 1, the super-resolution deep learning network of the present embodiment includes a fusion layer and a plurality of super-resolution networks, an input of each super-resolution network is a low-resolution MR image, an output of each super-resolution network is simultaneously input to the fusion layer, and the fusion layer outputs a high-resolution MR image;
and step three, reconstructing the low-resolution MR image into a high-resolution MR image by using the trained super-resolution deep learning network.
The super-resolution deep learning network comprises a plurality of super-resolution networks: depth can be increased within each super-resolution network, the network is widened by increasing the number of super-resolution networks, and the outputs of the super-resolution networks are integrated by an additional fusion layer. This essentially widens and deepens the network and effectively improves the capability of mapping and representing low-resolution features, thereby yielding a better MR image super-resolution result. Although the structural complexity of the deep learning network increases for the sake of reconstruction quality, the training difficulty does not increase because the super-resolution networks are trained independently, so model complexity and training difficulty are well balanced.
In a preferred embodiment, the second step of the present embodiment includes:
respectively training each super-resolution network by using an MR image training set, optimizing parameters of each super-resolution network and finishing the training of each super-resolution network;
and taking the low-resolution MR image as input, respectively acquiring the super-resolution results of each super-resolution network after the training, taking the super-resolution results of each super-resolution network as the input of the fusion layer, training the fusion layer, optimizing the parameters of the fusion layer, and finishing the training of the fusion layer.
During training in this embodiment, each super-resolution network is trained first; after that training is finished, the outputs of the super-resolution networks are used to train the fusion layer, so the training procedure is simple.
At present, most deep networks use the L1 or L2 loss function. These losses compare images pixel by pixel, do not account for human visual perception, and easily fall into local optima, so the best result is hard to obtain. In a preferred embodiment, the super-resolution networks and the fusion layer in step two may use a cascade of different loss functions during training.
Building on an analysis of commonly used loss functions, this embodiment proposes a cascaded multi-loss-function optimization method, for example combining the L1 loss and the MS-SSIM loss so that colour and brightness characteristics are retained and the contrast of high-frequency regions is better preserved, giving the deep model better characterization performance.
As shown in FIG. 1, in step two of this embodiment each super-resolution network/fusion layer is trained as follows:
a low-resolution MR image from the MR image training set is reconstructed into a high-resolution MR image by the super-resolution network/fusion layer, the error between the reconstructed image and the corresponding high-resolution MR image in the training set is calculated with a loss function that is a cascade of the loss functions of the super-resolution networks, and the parameters of the super-resolution network/fusion layer are updated according to the calculated error. When the loss functions of the super-resolution networks are cascaded, different weights can be assigned to them. An Adam optimizer is used to minimize the calculated error and obtain the optimized parameters of the super-resolution network/fusion layer.
MR imaging differs from natural-image imaging: an MR scan acquires a series of images with similar structures. MR sequence images therefore provide richer structural information than natural images and make the network more robust to noise. In this embodiment the inputs of the super-resolution networks and of the fusion layer are multi-slice low-resolution MR images, and their outputs are the corresponding multi-slice high-resolution MR images. This multi-slice input strategy provides the network with more structural information: because MR images are sequence images whose adjacent slices share similar structures, they supply 2.5D information to the network, which gives better results than 2D input while using fewer parameters and less computation than 3D convolution.
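As an illustration of this multi-slice (2.5D) input strategy, the sketch below builds three-slice training samples from registered low- and high-resolution MR volumes. The boundary handling (dropping the first and last slice) and the array layout are assumptions, since the patent does not specify them.

```python
import numpy as np

def make_multislice_pairs(lr_volume, hr_volume):
    """Build 2.5D samples (x_{t-1}, x_t, x_{t+1}) from registered LR/HR volumes.

    lr_volume: array of shape (num_slices, h, w); hr_volume: (num_slices, H, W),
    both from the same scan so that slice t corresponds in the two volumes.
    Returns X of shape (num_slices - 2, 3, h, w) and Y of shape (num_slices - 2, 3, H, W):
    each sample stacks a slice with its two neighbours along the channel axis.
    """
    X, Y = [], []
    for t in range(1, lr_volume.shape[0] - 1):
        X.append(np.stack([lr_volume[t - 1], lr_volume[t], lr_volume[t + 1]]))
        Y.append(np.stack([hr_volume[t - 1], hr_volume[t], hr_volume[t + 1]]))
    return np.asarray(X), np.asarray(Y)
```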
The super-resolution network of the present embodiment is composed of one independent network or a cascade of a plurality of independent networks, for example, in fig. 1, the first super-resolution network is a cascade of two independent networks, and the second super-resolution network is a single independent network.
Specific embodiment: this embodiment also provides a multiple-optimization-based MR image ESRFBN super-resolution network comprising 2 super-resolution networks, one an ESRFBN network model and the other an EDSR network model. The training method of the multiple-optimization-based MR image ESRFBN super-resolution network is as follows:
determining an input: multi-slice low resolution image set X (X)t-1,xt,xt+1)mMulti-slice high resolution MR image set Y (Y)t-1,yt,yt+1)m(ii) a The subscript t represents time;
determining an output: the ESRFBN network parameter is theta1EDSR network parameter is theta2
1: setting the hyper-parameters: batch: batch _ size, number of iterations: epoch
2: pre-training model loading
3: training process:
3.1: independent network model training
for 1to epoch do
for 1to m/batch_size do
for 1to batch_size do
(1) Forward calculation of ESRFBN network
Y1=fESRFBN(X;θ1) Wherein Y is1Indicating an ESRFBN network output result
Error calculation
L(θ1)=LMix(Y-Y1) Wherein
Figure BDA0003254412030000051
α=0.84
LMS-SSIMRepresenting a multi-level structure similarity loss function,
Figure BDA0003254412030000061
the L1 loss function is represented,
Figure BDA0003254412030000062
is a gaussian coefficient, α is 0.84.
Parameter optimization
θ'1=Adam(L(θ1))
(2) EDSR network forward computing
Y2=fEDSR(X;θ2) Wherein Y' represents the EDSR network output result
Error calculation
L(θ2)=LMix(Y-Y2) Wherein
Figure BDA0003254412030000063
α=0.84
Parameter optimization
θ'2=Adam(L(θ2))
end for
end for
end for
3.2: fusion layer parameter learning
(1) The parameters of the fused layer were randomly initialized by a zero-mean gaussian distribution with a standard deviation of 0.001.
(2) Independent network outputs respective super-resolution results
Y1=fESRFBN(X;θ1)
Y2=fEDSR(X;θ2)
(3) The fusion layer performs fusion of output results of each network
Y3=fFusion(Y1,Y2;θ3) Wherein theta330 parameters in total representing fusion layer parameters
(4) Error calculation
L(θ3)=LMix(Y-Y3) Wherein
Figure BDA0003254412030000064
α=0.84
(5) Fusion layer parameter update
θ'3=Adam(L(θ3))
4: model parameter output θ1、θ2、θ3
In this embodiment, after training is completed, the multiple-optimization-based MR image ESRFBN super-resolution network is tested as follows:
determining an input: multi-slice low resolution image set X (X)t-1,xt,xt+1)k
Multi-slice high resolution MR image set Y (Y)t-1,yt,yt+1)k
The index k indicates the number of slice image groups
Depth model parameter θ1、θ2、θ3
Determining an output: peak signal-to-noise ratio PSNR and structural similarity SSIM
The testing process comprises the following steps:
1: model parameter loading
2: image super-resolution
for 1to k do
(1) Independent network outputs respective super-resolution results
Y1 i=fESRFBN(X;θ1)
Y2 i=fEDSR(X;θ2)
(2) The fusion layer performs fusion of output results of each network
Y3 i=fFusion(Y1 i,Y2 i;θ3)
end for
3: PSNR and SSIM calculation (standard definitions; see the sketch after this procedure):
    PSNR = 10 · log10( MAX_I^2 / MSE )
    SSIM(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))
4: outputting PSNR and SSIM value experiment results and analyzing:
the super-resolution method of the MR image ESRFBN super-resolution network based on the multiple optimization in this embodiment is compared with a super-resolution algorithm based on interpolation, a super-resolution algorithm based on reconstruction, a super-resolution algorithm based on dictionary learning, and a super-resolution algorithm based on deep learning, and the specific algorithms are as follows:
and (3) BIC: the Bicubic Interpolation (BIC) method estimates the value of an unknown pixel point by using the values of 16 pixel points around the pixel point to be solved, and is a more complicated Interpolation method than the nearest neighbor Interpolation and the bilinear Interpolation. The method utilizes a cubic polynomial to solve an approximate theoretical optimal interpolation function sin (x pi)/x.
IBP: iterative Back Projection (IBP) is to reduce the error between the image obtained by the simulated degradation process and the actual low-resolution image step by step in an Iterative manner, and then perform Back Projection on the error value between the two to update the initially estimated high-resolution MR image, thereby reconstructing a high-resolution image.
POCS: projection Onto Convex Sets (POCS) requires defining closed Convex constraint Sets in a vector space, in which the actual high resolution image is contained. An estimate of the high resolution image is defined as a point within the intersection of these sets of constraints. A high resolution estimated image can be obtained by projecting any one of the initial estimates onto these constraint sets.
MAP: the Maximum A Posteriori (MAP) method is a probabilistic-based algorithm framework, is the most applied method in actual application and scientific research at present, and a plurality of specific super-resolution algorithms can be classified into the probabilistic framework. The basic idea of the maximum a posteriori probability method is derived from conditional probabilities, using a known low-resolution image sequence as an observation result, and estimating an unknown high-resolution image.
JDL: compressive Sensing theory (CS) indicates that an image can be reconstructed accurately under very severe conditions on an overcomplete dictionary from its set of sparse representation coefficients. The Joint dictionary learning method (JDL) can strengthen the similarity between the low-resolution image block dictionary and the high-resolution image block dictionary and the sparse representation of the corresponding real dictionary through Joint training of the low-resolution image block dictionary and the high-resolution image block dictionary, so that the sparse representation of the low-resolution image block and the high-resolution ultra-complete dictionary act together to reconstruct the high-resolution image block, and then the high-resolution image block is connected to obtain a final complete high-resolution image.
IJDL: an Improved Joint dictionary learning method (IJDL) aims at the problem that dictionary Joint cascade training only considers Joint image block pair errors but not high-low resolution dictionary individual reconstruction errors, provides a dictionary training method based on high-low resolution dictionary reconstruction error independent calculation, optimizes a target function, considers high-low resolution dictionary individual reconstruction errors, abandons a traditional cascade calculation method, effectively reduces reconstruction errors, and improves the reconstruction precision of images.
RDN: a Dense Residual Network (RDN) Network adopts a Residual Dense block, which can not only read the state from the previous Residual Dense block through a continuous memory mechanism, but also fully utilize all layers in the previous Residual Dense block through local Dense connection, self-adaptively retain accumulated characteristics through local characteristic fusion (LFF), and combine shallow characteristics and deep characteristics together by utilizing global Residual learning to fully utilize the hierarchical characteristics of the original low-resolution image.
EDSR: an Enhanced Deep Residual error network (EDSR) refers to a mechanism of learning based on Residual errors by the ResNet network, input is divided into two paths after being convoluted by one layer, one path is convoluted once again by n layers of RDB modules, the other path is directly led to a junction for weighted summation, and results are output through upsampling processing and convolution.
SRGAN: SRGAN belongs to generation countermeasure network, which adds a discriminator on the basis of SRResnet, and the role of the SRGAN is to add a discriminator network and 2 losses, and train two networks in an alternate training mode.
ESRGAN: the ESRGAN increases the sensing loss compared with the SRGAN, and replaces the original Residual Block module with a Residual-in-Residual detect Block (RDDB) module, so that the ESRGAN can obtain more vivid and natural textures.
SRFBN: a feedback module is designed in the SRFBN, and a return mechanism is adopted to improve the super-resolution effect. The back-passing has the advantages that no additional parameters are added, and the back-passing for multiple times is equivalent to deepening the network, and the generated image is continuously improved. Although other networks employ similar backhaul architectures, these networks have no way of the front layer getting useful information from the back layer.
ESRFBN: for the ESRFBN, the number of feature maps and the number of group convolutions are increased on the basis of the SRFBN network, the number of feature maps is changed from 32 to 64, and the number of group convolutions is also increased from 3 to 6.
DBPN: the network employs an up-down sampling layer that provides an error feedback mechanism at each stage. And interdependent up-down sampling blocks are constructed, each block representing a different image degradation and high resolution component. The network enables the characteristics of the up and down sampling stages to be connected, and improves the super-resolution effect result of the image.
The experimental results of the proposed method and of each comparison method are shown in Table 1 and Table 2. The tables show that the method of this embodiment achieves the best results for all four body parts and both super-resolution scales, with large gains in the evaluation indices. Compared with the second-best method at scale ×2, the PSNR values increase by 4.93, 5.90, 5.42 and 3.44 and the SSIM values by 0.0019, 0.0191, 0.0105 and 0.0206, respectively. At scale ×4, the PSNR values increase by 5.94, 6.07, 5.65 and 4.68 and the SSIM values by 0.0216, 0.0422, 0.0564 and 0.0418, respectively. These improvements show that the multiple-optimization-based MR image ESRFBN super-resolution network is better suited to the MR image super-resolution task and can effectively overcome the limitations of deep learning networks on this task; the experimental results demonstrate the effectiveness of the method of this embodiment.
TABLE 1 PSNR values (table provided as an image in the original publication)
TABLE 2 SSIM values (table provided as an image in the original publication)
The method targets three limitations of deep learning networks on the MR image super-resolution task: the sequential nature of MR images is not fully exploited, the loss function struggles to reach an optimal solution, and it is difficult to balance model complexity against training difficulty. First, seven current deep learning networks with good super-resolution performance (RDN, EDSR, DBPN, SRGAN, ESRGAN, SRFBN and ESRFBN) were tested at two super-resolution scales on four body parts (head, breast, knee and head-and-neck); the quantitative results show that the ESRFBN network is the most suitable for the MR image super-resolution task. Second, the experimental results show that the multiply optimized ESRFBN super-resolution network proposed in this embodiment, based on multi-slice input, multi-loss-function cascading and heterogeneous network fusion, achieves a better super-resolution result than the other networks. Finally, the invention provides a multiple-optimization-based MR image ESRFBN super-resolution network algorithm integrating the three optimization methods; the quantitative results show that it outperforms interpolation-based, reconstruction-based, dictionary-learning-based and other deep-learning-based super-resolution algorithms and greatly improves the MR image super-resolution result, demonstrating the effectiveness of the method provided by the invention.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (10)

1. An MR image depth network super-resolution method, characterized in that the method comprises:
s1, acquiring an MR image training set, including a low-resolution MR image set and a corresponding high-resolution MR image set;
s2, training the super-resolution deep learning network by using an MR image training set, wherein a low-resolution MR image is used as input, and a high-resolution MR image is used as output;
the super-resolution deep learning network comprises a fusion layer and a plurality of super-resolution networks, wherein the input of each super-resolution network is a low-resolution MR image, the output of each super-resolution network is simultaneously input to the fusion layer, and the fusion layer outputs a high-resolution MR image;
and S3, reconstructing the low-resolution MR image into a high-resolution MR image by using the trained super-resolution deep learning network.
2. The MR image depth network super-resolution method according to claim 1, wherein the S2 includes:
respectively training each super-resolution network by using an MR image training set, optimizing parameters of each super-resolution network and finishing the training of each super-resolution network;
and taking the low-resolution MR image as input, respectively acquiring the super-resolution results of each super-resolution network after the training, taking the super-resolution results of each super-resolution network as the input of the fusion layer, training the fusion layer, optimizing the parameters of the fusion layer, and finishing the training of the fusion layer.
3. The MR image depth network super-resolution method according to claim 2, wherein in S2, the super-resolution network and the fusion layer may use a cascade of different loss functions during training.
4. The MR image depth network super-resolution method according to claim 3,
in S2, the training method for each super-resolution network/fusion layer is as follows:
reconstructing a low-resolution MR image in an MR image training set into a high-resolution MR image by utilizing a super-resolution network/fusion layer, calculating the error between the high-resolution MR image and the corresponding high-resolution MR image in the MR image training set, wherein the loss function in error calculation is the cascade of the loss functions of the super-resolution networks, and updating the parameters of the super-resolution network/fusion layer according to the calculated error.
5. The MR image depth network super-resolution method according to claim 1, wherein different weights can be configured for the loss functions of each super-resolution network when the loss functions of each super-resolution network are cascaded.
6. The MR image depth network super-resolution method according to claim 5, wherein in S2, an Adam optimizer is adopted to optimize the calculated error, and parameters after super-resolution network/fusion layer optimization are obtained.
7. The MR image depth network super-resolution method according to claim 1 or 5, wherein the input of the super-resolution network and the fusion layer are both multi-slice low-resolution images of the MR image, and the output is both corresponding multi-slice high-resolution MR images.
8. The MR image depth network super-resolution method according to claim 7, wherein the super-resolution network is composed of one independent network or a cascade of a plurality of independent networks.
9. The MR image depth network super-resolution method according to claim 8, wherein the super-resolution deep learning network comprises 2 super-resolution networks, one of which is ESRFBN network model and the other is EDSR network model.
10. The MR image depth network super-resolution method according to claim 9, wherein the loss function of the super-resolution network during training is:
L_Mix = α · L_MS-SSIM + (1 - α) · G_σ · L_1
where L_MS-SSIM represents the multi-level structural similarity loss function, L_1 represents the L1 loss function, G_σ is a Gaussian coefficient, and α = 0.84.
CN202111055372.0A 2021-09-09 2021-09-09 MR image depth network super-resolution method based on multiple optimization Pending CN113744132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111055372.0A CN113744132A (en) 2021-09-09 2021-09-09 MR image depth network super-resolution method based on multiple optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111055372.0A CN113744132A (en) 2021-09-09 2021-09-09 MR image depth network super-resolution method based on multiple optimization

Publications (1)

Publication Number Publication Date
CN113744132A true CN113744132A (en) 2021-12-03

Family

ID=78737735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111055372.0A Pending CN113744132A (en) 2021-09-09 2021-09-09 MR image depth network super-resolution method based on multiple optimization

Country Status (1)

Country Link
CN (1) CN113744132A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116805284A (en) * 2023-08-28 2023-09-26 之江实验室 Feature migration-based super-resolution reconstruction method and system between three-dimensional magnetic resonance planes
CN116805284B (en) * 2023-08-28 2023-12-19 之江实验室 Feature migration-based super-resolution reconstruction method and system between three-dimensional magnetic resonance planes

Similar Documents

Publication Publication Date Title
CN111445390B (en) Wide residual attention-based three-dimensional medical image super-resolution reconstruction method
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN108460726B (en) Magnetic resonance image super-resolution reconstruction method based on enhanced recursive residual network
CN113012172B (en) AS-UNet-based medical image segmentation method and system
CN110070612B (en) CT image interlayer interpolation method based on generation countermeasure network
CN111899165A (en) Multi-task image reconstruction convolution network model based on functional module
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN111091616A (en) Method and device for reconstructing three-dimensional ultrasonic image
CN114266939B (en) Brain extraction method based on ResTLU-Net model
Qiu et al. Gradual back-projection residual attention network for magnetic resonance image super-resolution
CN112669248A (en) Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid
CN111696042B (en) Image super-resolution reconstruction method based on sample learning
CN111260667A (en) Neurofibroma segmentation method combined with space guidance
CN114943721A (en) Neck ultrasonic image segmentation method based on improved U-Net network
CN113744132A (en) MR image depth network super-resolution method based on multiple optimization
CN113870327B (en) Medical image registration method based on prediction multi-level deformation field
CN112990359B (en) Image data processing method, device, computer and storage medium
Qiu et al. Cardiac Magnetic Resonance Images Superresolution via Multichannel Residual Attention Networks
CN114511602B (en) Medical image registration method based on graph convolution Transformer
CN112258508B (en) Image processing analysis segmentation method, system and storage medium for four-dimensional flow data
CN115311135A (en) 3 DCNN-based isotropic MRI resolution reconstruction method
CN114049334A (en) Super-resolution MR imaging method taking CT image as input
CN113822865A (en) Abdomen CT image liver automatic segmentation method based on deep learning
CN113066145B (en) Deep learning-based rapid whole-body diffusion weighted imaging method and related equipment
CN117333571B (en) Reconstruction method, system, equipment and medium of magnetic resonance image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination