CN111899165A - Multi-task image reconstruction convolution network model based on functional module - Google Patents

Multi-task image reconstruction convolution network model based on functional module

Info

Publication number
CN111899165A
CN111899165A (Application CN202010548135.7A)
Authority
CN
China
Prior art keywords
network
reconstruction
sub
module
image
Prior art date
Legal status
Pending
Application number
CN202010548135.7A
Other languages
Chinese (zh)
Inventor
包立君
叶富泽
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202010548135.7A
Publication of CN111899165A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention relates to a multi-task image reconstruction convolutional network model based on functional modules, comprising an embedding sub-network, an inference sub-network and a reconstruction sub-network. Drawing on the strengths of residual networks and recursive learning, a recursive residual module is designed as the basic unit, and a deep convolutional neural network with good performance is constructed using a multi-layer supervision strategy. Application-oriented functional modules, namely an imaging-principle-adaptive error correction module, a high-frequency feature guidance module and an improved dense connection scheme, further enhance the network. The resulting model is suitable for compressed sensing imaging and can be extended to super-resolution imaging; it achieves effective, high-quality reconstruction of low-quality magnetic resonance images, recovers more accurate structural information and texture details, balances network depth against the number of parameters and the amount of computation, and offers the advantages of preventing overfitting, saving storage space and preserving generalization capability.

Description

Multi-task image reconstruction convolution network model based on functional module
Technical Field
The invention relates to the technical field of image processing and machine learning based on a convolutional neural network, in particular to a multi-task image reconstruction convolutional network model based on a functional module.
Background
Magnetic Resonance Imaging (MRI) is not only non-invasive but also provides multiple imaging modalities with excellent contrast for analyzing different anatomical features of a patient. However, MRI suffers from long data acquisition times, strict requirements on the imaged subject and high imaging cost, which limit its wider application. Applying compressed sensing (CS) to MRI offers a feasible way to accelerate MR imaging. CS methods undersample k-space at a rate below the Nyquist sampling rate, producing incoherent undersampling artifacts, and then reconstruct a high-quality image suitable for medical diagnosis by nonlinear optimization. Although many CS-MRI studies have demonstrated good clinical applicability, routine MRI scans are still based on fully sampled Cartesian sequences or accelerated acquisition using parallel imaging. The limitations of CS-MRI are mainly the following: (1) The sparse transforms in wide use are still too simple for biological tissues with complex structures. For example, although TV-based sparsity constrains abrupt local changes in the reconstructed image, it introduces staircase artifacts; wavelet transforms capture isotropic image information but introduce blocking artifacts. (2) Nonlinear optimization algorithms typically require an iterative optimization process, resulting in long reconstruction times. (3) Current optimization-model-based CS-MRI methods generally require setting several optimization parameters, and improper settings can over-constrain the solution, making the reconstructed image look unnatural, for example over-smoothed or containing residual undersampling artifacts.
The excellent performance of deep learning (DL) in computer vision has attracted widespread interest in medical image processing in recent years. CS-MRI reconstruction resembles inverse problems in natural image processing such as super-resolution reconstruction and denoising, and a convolutional neural network (CNN) can be used to initialize an optimization-based CS-MRI method or serve as a constraint term in an optimization model. The U-net model, with its multi-scale learning scheme and a receptive field covering the whole image, is widely applied in image segmentation and has also drawn attention for CS-MRI undersampled reconstruction. The variational network embeds a traditional CS-MRI variational model into a trainable, unrolled gradient descent scheme in which all parameters are trainable. The deep cascade convolutional neural network (DC-CNN) inserts data consistency layers between cascaded residual blocks so that the sampled k-space data correct the network prediction during reconstruction. Since MRI acquires k-space data, another mainstream line of research feeds k-space data directly into the network, which then jointly predicts k-space undersampled points and image-space feature information. For example, KIKI-Net alternately stacks KCNNs that process k-space data and ICNNs that process the image domain, other works use a CNN to learn k-space interpolation directly, and the automated transform by manifold approximation (AUTOMAP) network even learns the Fourier transform from k-space data to the image domain. Research shows that deeper networks yield more accurate estimates. Generative adversarial networks (GAN), trained in a zero-sum adversarial fashion, have made remarkable progress in image generation, and in CS-MRI reconstruction, heavyweight models combining GANs with very deep CNNs have achieved state-of-the-art results.
On the other hand, owing to limits on sampling time and motion of the imaged subject, the sampling density often cannot meet the requirement of high resolution. Post-processing techniques, such as zero-filled k-space extrapolation or image-domain spatial interpolation, are therefore typically employed to improve image resolution. Super-resolution (SR) reconstruction provides another effective way to improve MRI resolution: it can recover new information while increasing image resolution and thereby improve diagnostic confidence. SR-MRI reconstruction methods recover a high-resolution (HR) image from a single low-resolution (LR) image or from multiple LR images. Numerous studies have proposed various SR methods, all aiming to reconstruct the HR image as accurately as possible while minimizing noise, using prior information about the HR source signal such as image smoothness. Interpolation-based SR methods, such as nearest-neighbor, bilinear and bicubic interpolation, are common, simple and fast; although they increase the spatial size of an image, they cannot introduce useful diagnostic information and instead introduce unwanted artifacts such as blurring and ringing. Constraint-based methods, such as TV-based or non-local-means constraints, handle the ill-posedness of the SR problem well. Sparse-coding-based methods are more complex and obtain the HR image through a sparse representation over a specific sparse domain, as in dictionary-learning methods.
Existing research shows that convolutional-neural-network-based methods achieve markedly better image super-resolution reconstruction than sparse reconstruction methods. The SRCNN (Super-Resolution Convolutional Neural Network) model was the first applied to SR reconstruction of natural images, using three convolutional layers to learn an end-to-end mapping from LR to HR images. Inspired by the deep VGG network for ImageNet classification, Jiwon Kim et al. constructed the deeper VDSR (Very Deep convolutional network for Super-Resolution) model, composed of 20 convolutional layers and mainly learning the high-frequency residual between the input LR image and the output HR image. To deepen the network without increasing the number of parameters and further improve performance, Kim et al. also applied recursive learning in a CNN, building the deep recursive convolutional network DRCN for SR reconstruction. DRCN has the same depth as VDSR but achieves better performance. Subsequent researchers combined learning strategies such as recursive learning, residual networks, densely connected networks and generative adversarial networks (GAN) with these models, successively proposing a series of advanced CNN-based SR reconstruction models, such as the deep recursive residual network DRRN combining recursive learning with residual networks, SRDenseNet based on dense connections, EnhanceNet based on GAN, and SRGAN combining residual learning with GAN. Given the superior performance of CNN-based SR reconstruction on natural images, many researchers have applied the same ideas to SR-MRI.
MRI often requires a reduced sampling rate to speed up acquisition, and optimization-model-based MRI reconstruction algorithms can reconstruct satisfactory MR images well below the Nyquist sampling rate. However, as the sampling rate decreases, the reconstruction performance of these traditional optimization methods shows significant limitations.
Disclosure of Invention
The invention mainly aims to overcome the shortcomings of existing magnetic resonance image reconstruction techniques by providing a multi-task image reconstruction convolutional network model based on functional modules for compressed sensing reconstruction and super-resolution reconstruction of magnetic resonance images, so that excellent reconstruction of different low-quality magnetic resonance images can be obtained while improving the flexibility and extensibility of the network architecture.
The invention adopts the following technical scheme:
a multi-task image reconstruction convolution network model based on functional modules is characterized in that: based on a recursive residual network, including an embedding sub-network, an inference sub-network and a reconstruction sub-network; the embedding sub-network is used for extracting the characteristics of an input low-quality image, the reasoning sub-network is used for learning residual information between the embedding sub-network and the reconstruction sub-network, and the output of the reasoning sub-network and the output of the embedding sub-network are added through cross connection and input into the reconstruction sub-network for image reconstruction.
Preferably, the embedding sub-network and the reconstruction sub-network are respectively provided with an adaptive error correction module to ensure data fidelity between a high-quality image output by network reconstruction and a low-quality image originally input by the network.
Preferably, for compressed sensing magnetic resonance imaging reconstruction, the error correction module of the network is designed based on data consistency to ensure consistency of the network reconstruction output and the acquired k-space data.
Preferably, in the compressed sensing magnetic resonance imaging reconstruction, the embedded sub-network performs primary processing on the original low-quality image input by the network by using the self-adaptive error correction module to remove part of undersampling artifacts, and then performs subsequent feature extraction; the self-adaptive error correction module comprises two pre-activated convolutional layers, a cross-connection layer and a data consistency layer; before a high-quality image is reconstructed by adopting a multi-layer supervision strategy, a self-adaptive error correction module of a reconstruction sub-network carries out deviation correction on the network prediction output by each residual error module so as to ensure that the reconstructed image and the acquired k-space data have consistency.
Preferably, for super-resolution magnetic resonance imaging reconstruction, a back projection method is adopted to design a self-adaptive error correction module of the network.
Preferably, in super-resolution magnetic resonance imaging reconstruction, the adaptive error correction module of the embedding sub-network up-samples from the LR feature space to the HR feature space in a back-projection manner: LR features are extracted from the low-quality input, an up-sampling back-projection unit performs the up-sampling operation and high-resolution features are extracted, the up-sampled HR feature domain is back-projected onto the original LR feature domain, and the projection error is computed and merged into the HR feature space. The adaptive error correction module in the reconstruction sub-network back-projects the HR feature map predicted by the network onto the LR feature domain for error correction: the HR features output by the embedding sub-network and the HR residual features reconstructed by the inference sub-network are added through a cross-connection to obtain the network-predicted HR feature map, which is fed into the back-projection module for the back-projection operation, and the HR-domain projection error is added to the network-predicted HR feature map to give the error-corrected output.
Preferably, a recursive learning strategy is used when constructing the inference sub-network; the recursive residual block comprises two pre-activated convolution layers and a cross-connection from an input end to an output end, the convolution operation uses a pre-activated learning strategy, and batch normalization and linear rectification processing are performed before each layer of convolution operation.
Preferably, a multi-layer supervision strategy is adopted, the output characteristics of each recursive residual block are all transmitted to a reconstruction sub-network, the reconstruction sub-network respectively carries out convolution operation on the characteristics from the n recursive residual blocks, then intermediate prediction values corresponding to each recursive residual block are calculated through cross connection and embedding sub-network characteristic addition, and the final network reconstruction is a weighted average of all intermediate prediction results.
Preferably, the system further comprises a high-frequency feature guiding function module embedded in the reconstruction sub-network to constrain the reconstructed solution space and prevent prediction deviation.
Preferably, the inference subnetwork uses a dense connection mode, and the output of each previous residual block is used as the input of the current module, so that the feature recycling and the gradual estimation of complex feature information are facilitated.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
The model draws on the strengths of residual networks and recursive learning, takes a repeatedly recursed residual module as the basic unit and adopts a multi-layer supervision strategy to construct an application-oriented deep convolutional neural network; compared with a network of the same depth it reduces the number of parameters and the amount of computation, while preventing overfitting, saving storage space and preserving generalization capability. Self-designed functional modules further enhance the network: dense cross-connections between residual modules alleviate the vanishing-gradient problem, image high-frequency features guide and improve the reconstruction of structural details and texture information, and an adaptive error correction module designed according to the imaging principle reduces the reconstruction error of high-resolution features. The model is applied to multi-task undersampled imaging and achieves effective, accurate high-resolution reconstruction of low-resolution magnetic resonance images.
The model is used for undersampled magnetic resonance imaging reconstruction, covering two different applications: compressed sensing imaging (CS-MRI) and super-resolution imaging (SR-MRI). By designing a network model with good performance, high-quality images with richer structural information are reconstructed, effective magnetic resonance reconstruction at low sampling rates is achieved, and network depth is balanced against the number of parameters and the amount of computation.
The model of the invention uses the recursion learning strategy when constructing the reasoning sub-network, namely, the weight parameter of each residual block in the reasoning sub-network is shared, and no additional network parameter is needed while the network depth is increased, thereby having the advantages of preventing overfitting, saving parameter space, ensuring the generalization capability of the network model, and the like.
Drawings
Fig. 1 is a basic model of a recursive residual network.
FIG. 2 is a schematic structural diagram of an MRRN network applied to CS-MRI reconstruction.
FIG. 3 is a schematic structural diagram of an MRRN network applied to SR-MRI reconstruction.
FIG. 4 is a comparison of the reconstruction results of CS-MRI of brain images under Radial sampling templates at 10% and 20% sampling rates.
FIG. 5 is a comparison of SR-MRI reconstruction results for 4×4 and 3×3 down-sampled brain images.
FIG. 6 shows the CS-MRI reconstruction results for brain tumor data with zero-shot learning.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
The invention is further described below by means of specific embodiments.
A multi-task image reconstruction convolutional network model based on functional modules comprises an Embedding sub-network, an Inference sub-network and a Reconstruction sub-network. The embedding sub-network extracts features of the input low-quality image, and the inference sub-network learns residual information between the embedding sub-network and the reconstruction sub-network; the outputs of the inference sub-network and the embedding sub-network are then added through a cross-connection and fed into the reconstruction sub-network for image reconstruction.
The network of the invention is based on recursive residual learning and adds a series of application-oriented functional modules to enhance performance, including an adaptive error correction module designed according to the imaging principle, high-frequency feature guidance, and an improved dense connection. By designing a network model with a flexible and extensible structure, multi-task magnetic resonance image reconstruction is achieved, the reconstruction results contain richer structural details and texture information, and network depth is balanced against the number of parameters and the amount of computation.
The basic model of the recursive residual network is shown in Fig. 1. Panels (a) and (b) show the structure of the residual block (RB), where PreActConv denotes a pre-activated convolutional layer. Panel (c) shows the recursive residual network: Input is the network input; Embed, ResConv and Recon are pre-activated convolutional layers; Eltsum denotes element-wise addition; and Multi-Supervision is the multi-layer supervision strategy. The basic model combines a recursive learning strategy with the residual module so that all residual modules share weights. A multi-layer supervision strategy is adopted, and the output features of every recursive residual block are passed to the reconstruction sub-network. The reconstruction sub-network applies the residual convolution layer ResConv to the features of each recursive residual block and, after adding the embedding sub-network features through the cross-connection, reconstructs an intermediate prediction $\hat{X}_i$ through the reconstruction layer Recon. The final network output is the weighted average of all intermediate predictions,

$$\hat{X}=\sum_{i=1}^{n}\omega_i\,\hat{X}_i,$$

where the $\omega_i$ are scalar weight parameters obtained by network training and $n$ is the number of recursive residual blocks. The rectangles labeled RBi in Fig. 1(c) are residual blocks, whose specific structure is shown in panels (a) and (b). To simplify the drawings, the weighted averaging of the intermediate reconstructions is labeled as the Multi-Supervision unit and its specific structure is omitted from Figs. 2 and 3.
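To make the base model of Fig. 1(c) concrete, the sketch below assembles the recursive residual network with a single shared residual block and a learned weighted average of the intermediate predictions. It is a minimal TensorFlow/Keras illustration rather than the patented implementation: the class and variable names, the omission of batch normalization, and the normalization of the weights $\omega_i$ are assumptions made here for brevity.

```python
import tensorflow as tf

class RecursiveResidualNet(tf.keras.Model):
    """Sketch of the recursive residual base model of Fig. 1(c); names are illustrative."""
    def __init__(self, n_blocks=9, channels=64):
        super().__init__()
        self.n_blocks = n_blocks
        self.embed = tf.keras.layers.Conv2D(channels, 3, padding='same')      # Embed
        # one residual block whose two convolutions are shared by every recursion
        self.rb_conv1 = tf.keras.layers.Conv2D(channels, 3, padding='same')
        self.rb_conv2 = tf.keras.layers.Conv2D(channels, 3, padding='same')
        self.res_conv = tf.keras.layers.Conv2D(channels, 3, padding='same')   # ResConv
        self.recon = tf.keras.layers.Conv2D(1, 3, padding='same')             # Recon
        # trainable scalar weights omega_i for the multi-supervision average
        self.omega = self.add_weight(name='omega', shape=(n_blocks,),
                                     initializer='ones', trainable=True)

    def call(self, x):
        f_emb = self.embed(x)                          # feature extraction (BN/ReLU omitted)
        b, preds = f_emb, []
        for _ in range(self.n_blocks):
            h = self.rb_conv2(tf.nn.relu(self.rb_conv1(tf.nn.relu(b))))
            b = b + h                                  # residual cross-connection
            preds.append(self.recon(self.res_conv(b) + f_emb))   # intermediate prediction
        w = self.omega / tf.reduce_sum(self.omega)     # normalized weighted average (assumption)
        x_hat = tf.add_n([w[i] * p for i, p in enumerate(preds)])
        return x_hat, preds
```

Calling the model returns both the final weighted-average reconstruction and the list of intermediate predictions, which is convenient for the multi-layer supervision loss discussed later.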
The invention adopts an imaging principle self-adaptive error correction module which is respectively added to the embedding sub-network and the reconstruction sub-network so as to ensure the fidelity of data between a high-quality image output by network reconstruction and a low-quality image input originally by a network. Aiming at a Magnetic Resonance Compressed Sensing Imaging (CS-MRI) reconstruction task, an error correction module of a network is designed based on data consistency so as to ensure that the network reconstruction output and the acquired k-space data have consistency. Aiming at a magnetic Resonance Super-Resolution Imaging (SR-MRI) reconstruction task, an error correction module of a network is designed by adopting a back projection method.
The data consistency layer serves as the error correction module of the MRRN to prevent the originally acquired data from drifting during propagation through the deep network, and it is applied in both the embedding sub-network and the reconstruction sub-network of the MRRN, as shown in Fig. 2: (a) the data consistency embedding module DC_Embed, where DataConsistency is the data consistency layer; (b) the high-frequency filters G along the horizontal, vertical and two diagonal directions; (c) the MRRN network model. The embedding sub-network uses a residual block and a data consistency layer as the initial reconstruction of the network, which is expected to remove some undersampling artifacts up front, and then extracts initial features through the embedding layer.
The embedding sub-network first performs preliminary processing of the original low-quality input image with the error correction module DC_Embed to remove part of the undersampling artifacts, and then carries out subsequent feature extraction; DC_Embed consists of two pre-activated convolution layers PreActConv, a cross-connection and a data consistency layer (DataConsistency). Before the high-quality image is reconstructed under the multi-layer supervision strategy, the data consistency layer of the reconstruction sub-network corrects the deviation of the network prediction output by each residual module, so that the reconstructed image remains consistent with the acquired k-space data. Let $X$ denote the image to be reconstructed, $Y$ the acquired data, $k$ the coordinate index of k-space and $Y(k)$ the acquired k-space data; $F$ denotes the Fourier transform and $F^{H}$ the inverse Fourier transform. In the reconstruction sub-network, let $\hat{X}_i$ be the intermediate reconstruction of the $i$-th residual module output by the reconstruction layer Recon, and let $\tilde{X}_i=f_{DC}(\hat{X}_i)$ be the reconstruction prediction of each residual module after error correction by the data consistency layer. Transforming $\hat{X}_i$ to k-space with the Fourier transform, $\hat{S}_i=F\hat{X}_i$, the mapping function $f_{DC}(\cdot)$ of the data consistency layer outputs

$$\tilde{S}_i(k)=\begin{cases}\hat{S}_i(k), & k\notin\Omega\\[4pt] \dfrac{\hat{S}_i(k)+\lambda\,Y(k)}{1+\lambda}, & k\in\Omega\end{cases}\qquad \tilde{X}_i=F^{H}\tilde{S}_i,$$

where $\Omega$ is the set of sampled k-space coordinates and $\lambda$ is a positive weighting parameter; the data consistency layer thus performs a linear fit between the predicted data and the originally acquired k-space data. The value of $\lambda$, the constraint coefficient balancing data fidelity, is generally related to the noise level of the input data: the larger the noise, the smaller $\lambda$. Here $\lambda$ is determined by network training.
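Written out with FFTs, the data consistency correction above reduces to a few lines. The sketch below is a NumPy illustration under the definitions given in the text; the function name, the `mask` argument and the default value of lambda are placeholders (in the network, lambda is learned).

```python
import numpy as np

def data_consistency(x_pred, y_kspace, mask, lam=10.0):
    """Blend the network prediction with the acquired k-space data.

    x_pred   : predicted image (2-D array)
    y_kspace : acquired (undersampled) k-space data Y(k), zeros where unsampled
    mask     : boolean array, True at sampled k-space coordinates (the set Omega)
    lam      : positive weighting parameter lambda (trainable in the network)
    """
    s_pred = np.fft.fft2(x_pred)                    # F x: predicted k-space
    # at sampled locations, linearly fit prediction and acquisition;
    # elsewhere keep the prediction unchanged
    s_dc = np.where(mask, (s_pred + lam * y_kspace) / (1.0 + lam), s_pred)
    return np.fft.ifft2(s_dc)                       # F^H: back to the image domain
```

Applying this function to each intermediate prediction before the multi-supervision weighted average corresponds to the correction step performed by the reconstruction sub-network.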
The iterative back projection (IBP) algorithm is a back-projection error correction mechanism that iterates between HR and LR images; it is simple, practical and widely used in traditional super-resolution reconstruction of natural images. Inspired by it, Muhammad Haris et al. proposed the deep back-projection network (DBPN) for SR reconstruction of natural images. Compared with the traditional IBP method, DBPN can determine the optimal back-projection convolution kernels through network learning, making back projection from the HR feature domain to the LR feature domain more effective in feature space. Compared with other existing CNN-based SR reconstruction methods, DBPN makes full use of the interdependence between LR and HR images, so its advantage is more pronounced for severely down-sampled SR reconstruction problems. From the perspective of optimization-based SR reconstruction, back projection plays the role of the data fidelity term in the traditional optimization-based reconstruction model.
Therefore, in the present application a back-projection-based error correction module is added to the recursive residual network. In Fig. 3: (a) the up-sampling back-projection module (Up-BP) of the embedding sub-network, where S corresponds to the down-sampling factor S×S; (b) the back-projection module (BackProjection) in the reconstruction sub-network; (c) the high-frequency filters G along the horizontal, vertical and two diagonal directions; (d) the MRRN network model, which contains the up-sampling back-projection module (Up-BP) in the embedding sub-network and the back-projection module (BackProjection) in the reconstruction sub-network.
In the SR-MRI network, the error correction module of the embedding sub-network up-samples from the LR feature space to the HR feature space in a back-projection manner: low-resolution (LR) features are extracted from the low-quality input, the up-sampling back-projection unit performs the up-sampling operation and high-resolution (HR) features are extracted, the up-sampled HR feature domain is back-projected onto the original LR feature domain, and the projection error is computed and merged into the HR feature space. The error correction module in the reconstruction sub-network back-projects the HR feature map predicted by the network onto the LR feature domain for error correction: the HR features output by the embedding sub-network and the HR residual features reconstructed by the inference sub-network are added through a cross-connection to obtain the network-predicted HR feature map, which is fed into the back-projection module for the back-projection operation, and the HR-domain projection error is added to the network-predicted HR feature map to give the error-corrected output.
Specifically, the up-sampling back-projection module Up-BP used for the up-sampling operation of the embedding sub-network is shown in Fig. 3(a). The up-sampled HR feature domain is back-projected onto the original LR feature domain, the projection error is computed and then merged into the HR feature space. The operations are as follows:

(1) Up-sample the input LR feature map $F_{emb}$ with the deconvolution Deconv 1/S: $F_{up}=W_{up}(\tau(F_{emb}))$, where the kernel size of the deconvolution layer $W_{up}$ is set to $2S-\mathrm{mod}(S,2)$, $S$ corresponds to the down-sampling factor $S\times S$, and $\tau$ denotes the pre-activation function;

(2) Down-sample the HR feature map $F_{up}$ with a stride-$S$ convolution $W_{dn}$: $F_{dn}=W_{dn}(\tau(F_{up}))$;

(3) Up-sample the LR-domain projection error $F_{eLR}=F_{dn}-F_{emb}$ with a deconvolution layer $W_{uer}$ configured in the same way as $W_{up}$: $F_{uer}=W_{uer}(\tau(F_{eLR}))$;

(4) Correct the up-sampled HR features with the HR-domain projection error to give the module output: $F_{HR}=F_{uer}+F_{up}$.

The up-sampling back-projection unit can thus be formulated as $F_{HR}=f_{uBP}(F_{emb})$, where $F_{up}$, $F_{dn}$, $F_{eLR}$, $F_{emb}$ and $F_{uer}$ denote the feature maps output by the different convolutional layers or network nodes, $f_{uBP}$ is the mapping function of the up-sampling back-projection unit (labeled Up-BP in Fig. 3(a)), and $F_{emb}$ and $F_{HR}$ are the input and output feature maps of the unit.
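A minimal Keras rendering of the four Up-BP steps is sketched below. The deconvolution kernel size $2S-\mathrm{mod}(S,2)$ follows the text, while the padding choice, the kernel size of the stride-$S$ down-sampling convolution and the reduced pre-activation (ReLU only, no batch normalization) are simplifications assumed here.

```python
import tensorflow as tf

def make_up_bp(channels=64, S=2):
    """Build an up-sampling back-projection (Up-BP) unit for down-sampling factor S."""
    k = 2 * S - (S % 2)                       # deconvolution kernel size 2S - mod(S, 2)
    act = tf.nn.relu                          # pre-activation (BN omitted for brevity)
    W_up  = tf.keras.layers.Conv2DTranspose(channels, k, strides=S, padding='same')
    W_dn  = tf.keras.layers.Conv2D(channels, k, strides=S, padding='same')
    W_uer = tf.keras.layers.Conv2DTranspose(channels, k, strides=S, padding='same')

    def up_bp(f_emb):
        f_up  = W_up(act(f_emb))              # (1) upsample LR features to the HR domain
        f_dn  = W_dn(act(f_up))               # (2) project back down to the LR domain
        f_elr = f_dn - f_emb                  #     LR-domain projection error
        f_uer = W_uer(act(f_elr))             # (3) upsample the projection error
        return f_uer + f_up                   # (4) error-corrected HR feature map
    return up_bp
```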
The back-projection module BackProjection in the reconstruction sub-network is shown in Fig. 3(b). Before Recon is applied to reconstruct the HR image, the HR feature map predicted by the network is back-projected onto the LR feature domain for error correction. The HR-domain features $F_{HR}$ output by the embedding sub-network are added through the cross-connection to the HR-domain residual features $F_{r2i}$ reconstructed by the inference sub-network to obtain the network-predicted HR feature map $F_{sum}$. $F_{sum}$ is then fed into the back-projection module BackProjection, whose operations are:

(1) Down-sample with a stride-$S$ convolution layer $W_{dBP}$: $F_{dBP}=W_{dBP}(\tau(F_{sum}))$.

(2) Up-sample the LR-domain projection error $F_{eBP}=F_{dBP}-F_{emb}$: $F_{uBP}=W_{uBP}(\tau(F_{eBP}))$.

(3) Add the HR-domain projection error $F_{uBP}$ to the input $F_{sum}$ to obtain the error-corrected output: $F_{HBP}=F_{sum}+F_{uBP}$.

Finally, the back-projection operation is formulated as $F_{HBP}=f_{BP}(F_{sum},F_{emb})$, where $f_{BP}$ is the mapping function of the BackProjection module, $F_{sum}$ and $F_{emb}$ are its input feature maps, and $F_{HBP}$ is its output feature map.
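The BackProjection module of the reconstruction sub-network can be sketched in the same style; padding, kernel sizes and the reduced pre-activation are again assumptions of this illustration.

```python
import tensorflow as tf

def make_back_projection(channels=64, S=2):
    """Build the BackProjection error-correction module for down-sampling factor S."""
    k = 2 * S - (S % 2)
    act = tf.nn.relu                                   # pre-activation, BN omitted
    W_dBP = tf.keras.layers.Conv2D(channels, k, strides=S, padding='same')
    W_uBP = tf.keras.layers.Conv2DTranspose(channels, k, strides=S, padding='same')

    def back_projection(f_sum, f_emb):
        f_dbp = W_dBP(act(f_sum))                      # (1) project the HR prediction to the LR domain
        f_ebp = f_dbp - f_emb                          #     LR-domain projection error
        f_ubp = W_uBP(act(f_ebp))                      # (2) upsample the error to the HR domain
        return f_sum + f_ubp                           # (3) error-corrected HR feature map
    return back_projection
```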
To better retain negative-valued information in the network outputs and to speed up training, all convolution layers in the invention adopt a pre-activation strategy, i.e. the activation is applied before the convolution operation. The residual block using the pre-activation strategy is shown in Fig. 1(b): it consists of two pre-activated convolution units PreActConv and a cross-connection from input to output, where each PreActConv, shown in Fig. 1(a), comprises in sequence a batch normalization layer BN, a linear rectification layer ReLU and a convolution layer Conv. Denoting the input and output of the $i$-th residual block as $B_{i-1}$ and $B_i$, the residual block can be expressed as

$$B_i=f_{RB}(B_{i-1})=B_{i-1}+W_2\big(\tau\big(W_1(\tau(B_{i-1}))\big)\big),\qquad i=1,\dots,n,$$

where $f_{RB}(\cdot)$ is the mapping function of the residual block, $\tau$ is the pre-activation function comprising batch normalization BN and the linear rectification function ReLU, $W_1$ and $W_2$ are the two convolutional layer parameters shared by all recursive residual blocks, and $n$ is the number of residual blocks.
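The pre-activated recursive residual block maps directly onto a small Keras layer: two BN-ReLU-Conv units plus the input-to-output cross-connection, with the same layer instance reused for every recursion so that $W_1$ and $W_2$ stay shared. The following sketch is illustrative only.

```python
import tensorflow as tf

class RecursiveResidualBlock(tf.keras.layers.Layer):
    """Pre-activated residual block: B_i = B_{i-1} + W2(tau(W1(tau(B_{i-1}))))."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.w1 = tf.keras.layers.Conv2D(channels, kernel_size, padding='same')
        self.w2 = tf.keras.layers.Conv2D(channels, kernel_size, padding='same')

    def call(self, b_prev, training=False):
        h = self.w1(tf.nn.relu(self.bn1(b_prev, training=training)))   # first PreActConv
        h = self.w2(tf.nn.relu(self.bn2(h, training=training)))        # second PreActConv
        return b_prev + h                                               # cross-connection

# Recursion with shared weights: the same block instance is called n times, so
# depth grows without adding parameters.
# block = RecursiveResidualBlock()
# b = features
# for _ in range(n):
#     b = block(b)
```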
Low quality MRI imaging generally acquires most of the low frequency information, while most of the high frequency information of the image is lost. However, the high frequency part contains important structural information of the imaging target, especially texture details and structural edges, and the high frequency information is easily damaged in the acquisition process, and is difficult to recover. In order to reconstruct more image high-frequency feature information, a high-frequency feature guiding function module is designed and embedded into a reconstruction sub-network so as to restrict a reconstruction solution space and prevent prediction deviation.
As shown in Fig. 2(b), the present application designs four high-frequency filters along the horizontal, vertical and two diagonal directions to extract high-frequency feature information of an image. Other predefined filters, such as gradient filters, may also be used depending on the specific problem. The directional filters $G$ extract high-frequency features $X_u^{h}=G*X_u$ from the zero-filled image $X_u$, which are concatenated with $X_u$ along the channel direction to form the network input $[X_u,\,X_u^{h}]$. High-frequency feature information $X^{h}=G*X$ extracted from the label image $X$ guides the reconstruction of the network's high-frequency information. As shown by the dashed box in Fig. 2(c), the high-frequency feature guidance module feeds the network features $F_{r1i}$ into a feature reconstruction convolution layer FeaRecon, which reconstructs the high-frequency structural information $\hat{X}_i^{h}$ of the image under the supervision of $X^{h}$; $\hat{X}_i^{h}$ is then concatenated with $F_{r1i}$ as $[\hat{X}_i^{h},\,F_{r1i}]$ and returned to the network.
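The sketch below illustrates how the high-frequency guidance input can be formed: four directional high-pass maps are extracted from the zero-filled image and concatenated with it along the channel axis. The exact filter kernels are not specified in the text, so simple first-difference kernels are used here purely as stand-ins.

```python
import numpy as np
from scipy.ndimage import convolve

# Directional high-pass kernels (placeholders; the patent does not give the exact values).
G = {
    'horizontal': np.array([[0, 0, 0], [0, 1, -1], [0, 0, 0]], float),
    'vertical':   np.array([[0, 0, 0], [0, 1, 0], [0, -1, 0]], float),
    'diag_main':  np.array([[0, 0, 0], [0, 1, 0], [0, 0, -1]], float),
    'diag_anti':  np.array([[0, 0, 0], [0, 1, 0], [-1, 0, 0]], float),
}

def high_frequency_features(image):
    """Stack the four directional high-frequency maps along the channel axis."""
    return np.stack([convolve(image, g, mode='nearest') for g in G.values()], axis=-1)

# Network input: zero-filled image X_u concatenated with its high-frequency maps.
# x_u_h = high_frequency_features(x_u)
# net_in = np.concatenate([x_u[..., None], x_u_h], axis=-1)
```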
For the image reconstruction problem, the feature extraction of the network is a coarse-to-fine process, and the feature information learned at each level plays an important role in the final reconstruction. However, as the network gets deeper, feature information learned earlier is attenuated as it propagates towards the back of the network, and the vanishing-gradient problem caused by the greater depth also hinders training. Densely connected networks offer a good solution to these problems: they use the feature outputs of all previous layers as the input of the current layer, which strengthens information propagation and helps the network learn more complex patterns. However, as convolution layers are stacked, the number of input channels of each layer grows steadily, so 1×1 convolutions are needed to reduce the channel count. As indicated by the connection curves in Fig. 2, a dense connection pattern is used in the inference sub-network of the recursive residual network, taking the output of every previous residual block as input to the current module, which facilitates feature reuse and the progressive estimation of complex feature information. Unlike the existing dense connection pattern, and in view of the recursive learning strategy (every residual block shares the same weight parameters and outputs consistent feature information), only a single scalar $\mu_j$, determined by network training, is assigned to the output of each previous residual block, and their weighted average is taken as the input of the current residual block. The input of each residual block is therefore restated as

$$\tilde{B}_{i-1}=\sum_{j=0}^{i-1}\mu_j\,B_j,$$

where $B_j$ denotes the output of the $j$-th residual block, with $B_0$ being the features from the embedding sub-network.
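Because the recursive blocks share weights, the modified dense connection reduces to a scalar-weighted combination of the previous block outputs; a short sketch follows, in which normalizing the weights $\mu_j$ to form an average is an assumption of the illustration.

```python
import tensorflow as tf

def dense_input(prev_outputs, mu):
    """Weighted combination of earlier block outputs as the current block's input.

    prev_outputs : list [B_0, ..., B_{i-1}] of earlier block outputs (B_0 = embedding features)
    mu           : trainable scalar weights, one per previous block
    """
    w = mu[:len(prev_outputs)]
    w = w / tf.reduce_sum(w)                       # weighted average (normalization assumed)
    return tf.add_n([w[j] * prev_outputs[j] for j in range(len(prev_outputs))])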
The invention uses the recursion learning strategy when constructing the reasoning sub-network, namely the weight parameter of each residual block in the reasoning sub-network is shared, and no additional network parameter is needed while the network depth is increased, thus having the advantages of preventing overfitting, saving parameter space, ensuring the generalization capability of a network model and the like; the residual block is composed of two pre-activated convolution layers and a cross-connection from input to output, the convolution operation uses a pre-activation learning strategy, and Batch Normalization (BN) and linear rectification (ReLU) processing are performed before each layer of convolution operation.
Further, a multi-layer supervision strategy is adopted, the output characteristics of each recursive residual block are all transmitted to a reconstruction sub-network, the reconstruction sub-network respectively carries out convolution operation on the characteristics from the n recursive residual blocks, then intermediate prediction values corresponding to each recursive residual block are calculated through cross connection and sub-network embedding characteristic addition, and the final network reconstruction is the weighted average of all intermediate prediction results.
The loss function is constructed in combination with the multi-layer supervision method, jointly covering the reconstructed high-frequency features $\hat{X}_i^{h}$, the intermediate prediction $\hat{X}_i$ of each residual block, and the final network prediction $\hat{X}$. With $X_t$ denoting the label of the $t$-th training sample, $X_t^{h}$ its high-frequency features, and $\hat{X}_{i,t}$ the intermediate prediction of the $i$-th residual module for the $t$-th training sample, the loss function is defined as

$$L=\frac{1}{T}\sum_{t=1}^{T}\left[\sum_{i=1}^{n}\Big(\big\|X_t-\hat{X}_{i,t}\big\|_1+\big\|X_t^{h}-\hat{X}_{i,t}^{h}\big\|_1\Big)+\big\|X_t-\hat{X}_t\big\|_1\right],$$

where the final network prediction of each sample is the weighted average of all intermediate predictions,

$$\hat{X}_t=\sum_{i=1}^{n}\omega_i\,\hat{X}_{i,t},$$

$\hat{X}_t$ is the final reconstruction of the network for the $t$-th sample, $\omega_i$ is a scalar weight parameter determined by network training, $n$ is the number of residual blocks, $T$ is the number of training samples, and the loss is constructed with the $\ell_1$ norm.
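A direct TensorFlow rendering of this multi-supervised $\ell_1$ loss is sketched below; the equal weighting of the intermediate, high-frequency and final terms is an assumption of the sketch rather than a statement of the trained configuration.

```python
import tensorflow as tf

def multi_supervised_l1_loss(x_label, xh_label, preds, hf_preds, omega):
    """x_label : ground-truth image X_t;  xh_label : its high-frequency map X_t^h
    preds    : list of intermediate predictions X_hat_{i,t}
    hf_preds : list of reconstructed high-frequency maps X_hat^h_{i,t}
    omega    : trainable scalar weights for the final weighted average
    """
    w = omega / tf.reduce_sum(omega)
    x_final = tf.add_n([w[i] * preds[i] for i in range(len(preds))])
    loss = tf.reduce_mean(tf.abs(x_label - x_final))          # final-prediction term
    for x_i, xh_i in zip(preds, hf_preds):
        loss += tf.reduce_mean(tf.abs(x_label - x_i))          # intermediate term
        loss += tf.reduce_mean(tf.abs(xh_label - xh_i))        # high-frequency term
    return loss
```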
The following experiments were performed based on the model of the invention:
the magnetic resonance brain map dataset contained 16 scans from 8 healthy adults at different time periods, with the imager being a 7T Philips Healthcare MRI and the acquisition sequence being a T1 weighted Magnetization preparation fast Gradient Echo (MPRAGE). The specific data acquisition parameters are as follows: FOV 220mm × 220mm × 110mm, matrix size 224 × 224 × 110, TR 4ms, flip angle 7 °. Randomly select 9 scans of MR brain maps as training samples, 2 scans as validation data, and the remaining 5 scans as test samples. All the MR images are normalized to [0,1] in the gray scale range, each image is rotated and turned over at 90 degrees, and the data expansion is 8 times of the original data expansion. In the CS-MRI experiment, the data consistency layer is performed on k space, so that the image is not subjected to the dicing processing. The k-space undersampled templates used are the Radial sampling template and Random sampling template of 10%, 20% and 30%, corresponding to accelerations of 10, 5 and 3.33 times, respectively. In the SR-MRI experiment, an LR image is generated by simulating an HR image, the HR image is firstly blurred by using a Gaussian convolution kernel simulating a point spread function, and then a corresponding LR image is obtained by down-sampling in an image domain. For downsampling 2 × 2, 3 × 3, and 4 × 4, gaussian convolution kernels of sizes 3 × 3, 5 × 5, and 7 × 7, and standard deviations of 0.8, 1.0, and 1.6, respectively, are used. During the network training process, the LR/HR images corresponding to the down-sampling 2 × 2, 3 × 3 and 4 × 4 are cut into image blocks of size 34 × 34/68 × 68, 23 × 23/69 × 69 and 17 × 17/68 × 68, respectively.
To verify the generalization capability of the network and its diagnostic value for undersampled reconstruction of clinical data, zero-shot learning experiments were performed on the BraTS2018 data containing brain tumor lesion information, i.e. brain tumor images were reconstructed directly with network parameters trained on brain data of healthy subjects. The BraTS2018 data come from the multi-modal brain tumor segmentation challenge of the 2018 international conference on medical image computing and computer-assisted intervention and contain multi-modal brain tumor data acquired with different sampling sequences on conventional 3T scanners at 19 institutions, including glioblastoma data from 210 patients and low-grade glioma data from 75 patients, each patient's data having dimensions 240 × 240 × 155.
Except for the feature reconstruction layer FeaRecon, which has 4 channels (corresponding to the high-frequency features in the four directions), and the output channel of the reconstruction layer Recon, the number of channels of every convolutional layer in the MRRN is set to 64, and every convolution kernel is 3 × 3. The convolution kernel weights are initialized with the MSRA method (He K, Zhang X, Ren S, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1026-1034), i.e. drawn from a zero-mean Gaussian distribution with standard deviation

$$\sigma=\sqrt{\frac{2}{ks^{2}\cdot chn}},$$

where $ks$ denotes the convolution kernel size and $chn$ the number of channels. The learning rate is initialized to $10^{-4}$ and multiplied by 0.1 after every 50 epochs, the mini-batch size is set to 16, and training is performed on the TensorFlow platform with the ADAM optimizer. For the different sampling rates of each sampling template, the MRRN is trained for 100 epochs with the same hyper-parameters, network structure and training samples. The program runs on a workstation with a 12 GB NVIDIA Pascal Titan X GPU and 64 GB RAM.
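The training configuration above (MSRA/He initialization, ADAM, learning rate 10^-4 decayed by 0.1 every 50 epochs, mini-batch size 16) corresponds roughly to the following TensorFlow snippet; `model`, `x_train` and `y_train` are placeholders rather than names from the patent.

```python
import tensorflow as tf

def lr_schedule(epoch, lr):
    # multiply the learning rate by 0.1 after every 50 epochs
    return lr * 0.1 if epoch > 0 and epoch % 50 == 0 else lr

# MSRA initialization for a 3x3, 64-channel convolution layer
conv = tf.keras.layers.Conv2D(
    64, 3, padding='same',
    kernel_initializer=tf.keras.initializers.HeNormal())

# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
#               loss=multi_supervised_l1_loss)             # loss sketched earlier
# model.fit(x_train, y_train, epochs=100, batch_size=16,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```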
Table 1 compares the network structures of the CS-MRI methods. DC-CNN comprises 5 cascaded sub-networks, each containing 5 convolutional layers. The MRRN has 27 convolutional layers, deeper than the 25-layer DC-CNN and the 23-layer U-net, yet its parameter count is less than one third of DC-CNN's and about two orders of magnitude below that of the heavily parameterized U-net. U-net is widely used in medical image analysis thanks to its multi-scale learning scheme, but its huge number of parameters is an excessive burden for medical image reconstruction problems with limited data, making overfitting likely; the specific experimental analysis is given below.
TABLE 1. Comparison of network architecture complexity for CS-MRI.
Table 2 compares the structures of the different CNN networks used for SR-MRI. SRCNN consists of 3 convolutional layers with kernel sizes 9 × 9, 5 × 5 and 5 × 5 and channel numbers 64, 32 and 1; it is the shallowest of the three networks and has the fewest parameters. Because the MRRN uses deconvolution kernels of different sizes for different down-sampling factors S × S, its parameter count varies with S, while its network structure and depth remain the same. The MRRN depth is 31, about 50% more than VDSR, yet even for S = 4 its number of parameters is only about half that of VDSR.
TABLE 2. Comparison of network architecture complexity for SR-MRI.
Fig. 4 shows the CS-MRI reconstructions of MR brain images under the Radial sampling template at 10% and 20% sampling rates, together with the corresponding local enlargements, absolute difference maps and PSNR/MSSIM values. The first column shows the fully sampled image, a local enlargement and the sampling templates. The second to last columns show the reconstruction results, local enlargements and absolute difference maps of the different methods, with the PSNR/MSSIM values indicated above the corresponding images. The zero-filled reconstruction exhibits stripe artifacts and blurring, and anatomical structures are hard to distinguish, especially at the 10% sampling rate. At the 10% sampling rate the traditional optimization-based reconstruction methods have lost their advantages; although PANO is an advanced model among them, its result still shows severe artifacts and looks unnatural. Compared with traditional optimization-based reconstruction, the DL-based reconstruction methods have clear advantages in both visual quality and quantitative comparison. However, the U-net reconstruction still contains noticeable artifacts and, compared with DC-CNN and the MRRN, its reconstructed structures are less rich in detail. Although DC-CNN performs similarly to the MRRN, the MRRN reconstruction has the fewest undersampling artifacts and the highest PSNR/MSSIM values. As indicated by the arrows in the local enlargements, the MRRN reconstructs richer structural details and relatively sharp edge structures.
Table 3 gives the PSNR/MSSIM values averaged over the test data set for each reconstruction method at sampling rates of 10%-30% under Radial and Random undersampling. Under all sampling templates and sampling rates the MRRN achieves the highest objective index values. With the 10% Radial undersampling template, the MRRN's PSNR is even more than 3 dB higher than that of the advanced traditional PANO method, and its MSSIM is nearly 0.2 higher. It is worth noting that, even though U-net has the largest number of network parameters, its objective indices under the 20% and 30% Random sampling templates are lower than PANO's, which shows that a huge network structure with massive parameters is not universally applicable and that a more targeted network structure should be designed for a specific problem.
TABLE 3. Quantitative comparison of CS-MRI reconstructions on the healthy brain test set.
Note: the upper row is the psnr (db) value and the lower row is the MSSIM value. Bold indicates best reconstruction results.
Fig. 5 shows the SR-MRI reconstruction results for 4 × 4 and 3 × 3 down-sampled brain images, together with the corresponding local enlargements, absolute difference maps and PSNR/MSSIM values. From left to right are the high-resolution reference image with its corresponding low-resolution image, bicubic interpolation, the reconstruction results of the different SR methods, and the corresponding absolute difference maps; the PSNR/MSSIM values are indicated above the corresponding images. Compared with the traditional optimization-based method ScSR, the CNN-based SR reconstruction methods yield better image contrast and naturally consistent details, especially for the larger 4 × 4 down-sampling. Among the CNN-based SR reconstruction methods, deeper networks such as VDSR and the MRRN of the present application outperform the shallow SRCNN. The MRRN network achieves the best reconstruction and the highest PSNR/MSSIM values under all down-sampling factors. As down-sampling increases further, for example at 4 × 4, the traditional methods ScSR and SRCNN can no longer reconstruct reliable edge structures, as shown in the local enlargements. Comparison of the absolute difference maps with the high-resolution reference shows that under the different down-sampling factors the MRRN recovers tissue details closer to the original image, such as gray and white matter boundaries and fine vascular structures, as indicated by the red arrows.
Table 4 lists the average PSNR/MSSIM values of the different methods on the test data sets at the different down-sampling factors. Under 4 × 4 down-sampling, the objective index PSNR/MSSIM of the MRRN rises from 20.97 dB/0.7736 for the traditional method ScSR to 26.12 dB/0.8904, with a PSNR more than 2 dB higher and an MSSIM about 0.04 higher than those of the deep-CNN-based VDSR.
TABLE 4. Quantitative comparison of SR-MRI reconstructions of healthy brain images.
Note: the upper row gives the PSNR (dB) values and the lower row the MSSIM values. Bold indicates the best reconstruction result.
Although the training samples contain no data for a given task, zero-shot learning allows the task to be solved by a model trained on other samples. The MRRN model trained on MR brain images of healthy volunteers was directly used for CS-MRI reconstruction of the BraTS2018 brain tumor test set; the reconstruction results are shown in Fig. 6, with sampling rates of 10%, 20% and 30% from top to bottom. Under the Radial template at 20% and 30% sampling rates and the Random template at 10%-30% sampling rates, the MRRN's PSNR is 10 dB higher than zero-filled reconstruction and its MSSIM improves by about 0.5. For the more difficult Radial template at a 10% sampling rate, PSNR and MSSIM are 5 dB and 0.4 higher than zero-filled reconstruction, respectively. As the figure shows, under Radial and Random sampling templates with 10%-30% sampling rates the zero-shot MRRN achieves reliable reconstruction and preserves prominent pathological features, such as the clear tumor boundaries and textures shown in the enlarged regions, demonstrating the good generalization capability of the MRRN network.
The above description is only an embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification of the invention made by using this design concept shall be deemed an act falling within the protection scope of the present invention.

Claims (10)

1. A multi-task image reconstruction convolutional network model based on functional modules, characterized in that: it is based on a recursive residual network and comprises an embedding sub-network, an inference sub-network and a reconstruction sub-network; the embedding sub-network is used for extracting features of an input low-quality image, the inference sub-network is used for learning residual information between the embedding sub-network and the reconstruction sub-network, and the output of the inference sub-network and the output of the embedding sub-network are added through a cross-connection and input into the reconstruction sub-network for image reconstruction.
2. The functional module-based multitask image reconstruction convolutional network model of claim 1, wherein: the embedded sub-network and the reconstruction sub-network are respectively provided with a self-adaptive error correction module so as to ensure the data fidelity between a high-quality image output by network reconstruction and a low-quality image input originally by a network.
3. The functional module-based multitask image reconstruction convolutional network model of claim 2, wherein: aiming at compressed sensing magnetic resonance imaging reconstruction, an error correction module of the network is designed based on data consistency so as to ensure consistency between the output of the network reconstruction and the acquired k-space data.
4. The functional module-based multi-task image reconstruction convolutional network model of claim 3, wherein: in compressed sensing magnetic resonance imaging reconstruction, the embedding sub-network first performs preliminary processing of the original low-quality input image with the adaptive error correction module to remove part of the undersampling artifacts, and then carries out subsequent feature extraction; the adaptive error correction module comprises two pre-activated convolutional layers, a cross-connection and a data consistency layer; before the high-quality image is reconstructed under the multi-layer supervision strategy, the adaptive error correction module of the reconstruction sub-network corrects the deviation of the network prediction output by each residual module, so that the reconstructed image remains consistent with the acquired k-space data.
5. The functional module-based multitask image reconstruction convolutional network model of claim 2, wherein: aiming at super-resolution magnetic resonance imaging reconstruction, a self-adaptive error correction module of the network is designed by adopting a back projection method.
6. The functional module-based multi-task image reconstruction convolutional network model of claim 5, wherein: in super-resolution magnetic resonance imaging reconstruction, the adaptive error correction module of the embedding sub-network up-samples from the LR feature space to the HR feature space in a back-projection manner, uses an up-sampling back-projection unit for the up-sampling operation and then extracts high-resolution features, back-projects the up-sampled HR feature domain onto the original LR feature domain, and computes the projection error and merges it into the HR feature space; the adaptive error correction module in the reconstruction sub-network back-projects the HR feature map predicted by the network onto the LR feature domain for error correction: the HR features output by the embedding sub-network and the HR residual features reconstructed by the inference sub-network are added through a cross-connection to obtain the network-predicted HR feature map, which is then input into the back-projection module for the back-projection operation, and the HR-domain projection error is added to the network-predicted HR feature map to obtain the error-corrected output.
7. The functional module-based multitask image reconstruction convolutional network model of claim 1, wherein: when constructing the reasoning sub-network, using a recursive learning strategy; the recursive residual block comprises two pre-activated convolution layers and a cross-connection from an input end to an output end, the convolution operation uses a pre-activated learning strategy, and batch normalization and linear rectification processing are performed before each layer of convolution operation.
8. The functional module-based multi-task image reconstruction convolutional network model of claim 1, wherein a multi-layer supervision strategy is adopted: the output features of every recursive residual block are passed to the reconstruction sub-network, which performs a separate convolution operation on the features from each of the n recursive residual blocks, adds the features of the embedding sub-network through a cross connection to obtain the intermediate prediction corresponding to each recursive residual block, and takes a weighted average of all intermediate predictions as the final network reconstruction.
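One way such a multi-layer-supervised reconstruction head could be wired is sketched below, assuming PyTorch and learnable, softmax-normalized averaging weights (the normalization and all names are illustrative assumptions; the claim only specifies a weighted average):

```python
import torch
import torch.nn as nn

class MultiLayerSupervisedReconstruction(nn.Module):
    """Each block's features get their own convolution, are added to the
    embedding sub-network prediction via a cross connection to form an
    intermediate prediction, and the final output is a weighted average."""
    def __init__(self, channels, n_blocks, out_channels=1):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)
            for _ in range(n_blocks)
        )
        # one learnable averaging weight per intermediate prediction
        self.weights = nn.Parameter(torch.full((n_blocks,), 1.0 / n_blocks))

    def forward(self, block_feats, embed_pred):
        # block_feats: list of n feature maps, one per recursive residual block
        # embed_pred:  cross-connected output of the embedding sub-network
        preds = [conv(f) + embed_pred for conv, f in zip(self.convs, block_feats)]
        w = torch.softmax(self.weights, dim=0)
        final = sum(wi * p for wi, p in zip(w, preds))
        return final, preds   # intermediate preds can receive their own loss terms
```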
9. The functional module-based multi-task image reconstruction convolutional network model of claim 1, further comprising a high-frequency feature guidance module embedded in the reconstruction sub-network to constrain the reconstruction solution space and prevent prediction bias.
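The claim does not spell out the guidance operator, so the following is only one common way to obtain a high-frequency map that could then be concatenated with the reconstruction features or used as an extra supervision target (the function name and the box-filter choice are assumptions):

```python
import torch
import torch.nn.functional as F

def high_frequency_map(image, kernel_size=5):
    """Subtract a local-average (low-pass) version of the image so that
    only edges and texture detail remain as the guidance signal."""
    pad = kernel_size // 2
    low = F.avg_pool2d(F.pad(image, (pad,) * 4, mode="reflect"),
                       kernel_size, stride=1)
    return image - low
```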
10. The functional module-based multi-task image reconstruction convolutional network model of claim 1, wherein the inference sub-network uses a dense connection scheme, taking the outputs of all preceding recursive residual blocks as the input of the current block, so as to facilitate feature reuse and the progressive estimation of complex feature information.
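A compact sketch of such a dense connection scheme, assuming PyTorch and a 1x1 convolution to compress the concatenated features back to a fixed width (the fusion convolution and all names are assumptions; it can wrap, for example, the RecursiveResidualBlock sketched above):

```python
import torch
import torch.nn as nn

class DenselyConnectedInference(nn.Module):
    """Every block receives the concatenation of the input and the outputs
    of all preceding blocks, compressed to a fixed width before use."""
    def __init__(self, block_factory, channels, n_blocks):
        super().__init__()
        self.blocks = nn.ModuleList(block_factory(channels) for _ in range(n_blocks))
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, kernel_size=1)
            for i in range(n_blocks)
        )

    def forward(self, x):
        outputs = [x]
        for block, fuse in zip(self.blocks, self.fuse):
            dense_in = fuse(torch.cat(outputs, dim=1))  # reuse all earlier features
            outputs.append(block(dense_in))
        return outputs[1:]   # per-block outputs for the reconstruction sub-network
```

For instance, DenselyConnectedInference(RecursiveResidualBlock, 64, 4) would chain four densely connected blocks and return the per-block outputs expected by the multi-layer supervision head sketched earlier.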
CN202010548135.7A 2020-06-16 2020-06-16 Multi-task image reconstruction convolution network model based on functional module Pending CN111899165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010548135.7A CN111899165A (en) 2020-06-16 2020-06-16 Multi-task image reconstruction convolution network model based on functional module

Publications (1)

Publication Number Publication Date
CN111899165A (en) 2020-11-06

Family

ID=73207647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010548135.7A Pending CN111899165A (en) 2020-06-16 2020-06-16 Multi-task image reconstruction convolution network model based on functional module

Country Status (1)

Country Link
CN (1) CN111899165A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460726A (en) * 2018-03-26 2018-08-28 厦门大学 A kind of magnetic resonance image super-resolution reconstruction method based on enhancing recurrence residual error network
CN109741260A (en) * 2018-12-29 2019-05-10 天津大学 A kind of efficient super-resolution method based on depth back projection network
CN110151181A (en) * 2019-04-16 2019-08-23 杭州电子科技大学 Rapid magnetic resonance imaging method based on the U-shaped network of recurrence residual error

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIJUN BAO et al.: "Undersampled MR image reconstruction using an enhanced recursive residual network", Journal of Magnetic Resonance *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096208A (en) * 2021-03-16 2021-07-09 天津大学 Reconstruction method of neural network magnetic resonance image based on double-domain alternating convolution
CN113096208B (en) * 2021-03-16 2022-11-18 天津大学 Reconstruction method of neural network magnetic resonance image based on double-domain alternating convolution
WO2022257090A1 (en) * 2021-06-08 2022-12-15 苏州深透智能科技有限公司 Magnetic resonance imaging method and related device
CN113506258A (en) * 2021-07-02 2021-10-15 中国科学院精密测量科学与技术创新研究院 Under-sampling lung gas MRI reconstruction method for multitask complex value deep learning
CN113506222A (en) * 2021-07-30 2021-10-15 合肥工业大学 Multi-mode image super-resolution method based on convolutional neural network
CN113506222B (en) * 2021-07-30 2024-03-01 合肥工业大学 Multi-mode image super-resolution method based on convolutional neural network
CN114582487A (en) * 2022-01-26 2022-06-03 北京博瑞彤芸科技股份有限公司 Traditional Chinese medicine diagnosis and treatment assisting method and system based on traditional Chinese medicine knowledge graph
CN114579710A (en) * 2022-03-15 2022-06-03 西南交通大学 Method for generating problem query template of high-speed train
CN114579710B (en) * 2022-03-15 2023-04-25 西南交通大学 Method for generating problem query template of high-speed train
CN114663288A (en) * 2022-04-11 2022-06-24 桂林电子科技大学 Single-axial head MRI (magnetic resonance imaging) super-resolution reconstruction method

Similar Documents

Publication Publication Date Title
CN108460726B (en) Magnetic resonance image super-resolution reconstruction method based on enhanced recursive residual network
CN111899165A (en) Multi-task image reconstruction convolution network model based on functional module
Li et al. A review of the deep learning methods for medical images super resolution problems
Sánchez et al. Brain MRI super-resolution using 3D generative adversarial networks
Du et al. Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network
CN110827216B (en) Multi-generator generation countermeasure network learning method for image denoising
WO2022047625A1 (en) Image processing method and system, and computer storage medium
CN111028306A (en) AR2U-Net neural network-based rapid magnetic resonance imaging method
CN108416821B (en) A kind of CT Image Super-resolution Reconstruction method of deep neural network
Du et al. Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN109360152A (en) 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks
Zhou et al. Volume upscaling with convolutional neural networks
CN109214989A (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
Fu et al. Residual scale attention network for arbitrary scale image super-resolution
CN111598964B (en) Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN110322402A (en) Medical image super resolution ratio reconstruction method based on dense mixing attention network
CN116823625B (en) Cross-contrast magnetic resonance super-resolution method and system based on variational self-encoder
CN111986092B (en) Dual-network-based image super-resolution reconstruction method and system
Pham et al. Simultaneous super-resolution and segmentation using a generative adversarial network: Application to neonatal brain MRI
Wang et al. 3D dense convolutional neural network for fast and accurate single MR image super-resolution
Belov et al. Towards ultrafast MRI via extreme k-space undersampling and superresolution
Sander et al. Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI
Sital et al. 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images
Wang et al. Brain MR image super-resolution using 3D feature attention network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201106