CN112634385B - Rapid magnetic resonance imaging method based on deep Laplace network - Google Patents
Rapid magnetic resonance imaging method based on deep Laplace network
- Publication number: CN112634385B
- Application number: CN202011102983.1A
- Authority: CN (China)
- Prior art keywords: image, network, laplace, deep, lap
- Prior art date: 2020-10-15
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T 11/005 — Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
- G06N 3/045 — Combinations of networks
- G06N 3/08 — Learning methods
- G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T 2207/10088 — Magnetic resonance imaging [MRI]
- G06T 2207/20081 — Training; Learning
- G06T 2207/20084 — Artificial neural networks [ANN]
Abstract
A fast magnetic resonance imaging method based on a deep Laplace network comprises the following steps: step 1, input a $u$-fold undersampled MR image $y_u$; step 2, obtain a reconstructed MR image $\hat{x}$ from the deep Laplace network $f_{Lap}(\cdot;\Theta)$. FIG. 1 shows an example of the deep Laplace network $f_{Lap}(\cdot;\Theta)$. The invention combines the sampling principle of MR images with deep learning and, following the Laplacian pyramid idea, reconstructs the highly undersampled MR image layer by layer, which effectively reduces the influence of the heavy artifacts in the input data on the reconstruction and improves both the speed and the accuracy of MR image reconstruction.
Description
Technical Field
The invention relates to the field of medical image processing, and in particular to magnetic resonance imaging, with the aim of accelerating the reconstruction of medical magnetic resonance images.
Background
Magnetic resonance imaging (Magnetic Resonance Imaging, MRI) is widely used as a medical in vivo imaging technique due to its non-invasiveness, high resolution, biosafety, and excellent soft-tissue contrast. Although MRI poses little radiation hazard to the human body, its acquisition time is long (typically over 30 minutes), which leads to three problems: (1) higher examination costs; (2) reduced patient comfort and compliance, with involuntary patient movement causing localized artifacts in the MR (Magnetic Resonance) images; (3) difficulty in applying MRI to screening for diseases such as stroke, where diagnosis is time-critical. Since the advent of MRI in the 1970s, improving imaging speed has been a major ongoing research goal, and it remains a research hotspot.
Conventional fast MR imaging techniques are mainly based on compressed sensing (Compressed Sensing, CS), which shortens the magnetic resonance scan time several-fold by directly acquiring a compressed image. However, owing to factors such as data acquisition, domain transformation, reconstruction algorithms, and processing systems, compressed-sensing imaging has not been widely adopted in the magnetic resonance field in more than a decade.
In the last three years, with the rapid development of deep learning in natural image processing, its research and application in medical imaging have also received a great deal of attention. Compared with conventional compressed-sensing methods, deep-learning-based fast MR imaging is both more accurate and faster, pointing to a new direction for fast MRI. Existing deep learning algorithms essentially learn a direct mapping from highly undersampled MR data to fully sampled MR data, which usually requires extensive zero padding of the undersampled data to meet the needs of network training. However, heavy zero padding not only adds computational overhead but also introduces significant artifacts into the reconstructed MR image, lowering its signal-to-noise ratio.
Disclosure of Invention
To overcome the defects of the prior art, that is, to eliminate the heavy artifacts present in highly undersampled MR images and to learn a complex end-to-end mapping efficiently with limited network capacity, the invention combines a deep cascade network structure with the Laplacian pyramid idea and provides a fast magnetic resonance imaging method based on a deep Laplace network. The method combines the sampling principle of MR images with deep learning and reconstructs the highly undersampled MR image layer by layer following the Laplacian pyramid idea, which effectively reduces the influence of the heavy artifacts in the input data on the reconstruction and improves both the speed and the accuracy of MR image reconstruction.
The technical scheme adopted for solving the technical problems is as follows:
A method of fast magnetic resonance imaging based on a deep Laplace network (Deep Laplacian Network, DeepLap), comprising the steps of:
Step 1: input a $u$-fold undersampled MR image $y_u \in \kappa^M$, where $y_u$ is the MR image undersampled by a factor of $u$ in k-space, $\kappa$ denotes the k-space in which the MR image lies, $M$ is the dimension of $y_u$, and $N$ is the dimension of the fully sampled MR image. The undersampled MR image $y_u$ is obtained by

$$y_u = M_u y = M_u F x,$$

where $y \in \kappa^N$ is the fully sampled image of $y_u$ in k-space, $x \in \mathcal{P}^N$ is the fully sampled image of $y_u$ in pixel space, $\mathcal{P}$ denotes the pixel space in which the MR image lies, $F \in \mathbb{C}^{N \times N}$ is the Fourier transform matrix that transforms the MR image from pixel space to k-space, and $M_u \in \{0,1\}^{M \times N}$ is the mask matrix that undersamples $y$ by a factor of $u$.
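The sampling model above can be made concrete with a short sketch. Below is a minimal NumPy illustration, assuming a 2D slice and a row-wise center-undersampling mask; the variable names and the row-masking simplification are ours, not the patent's:

```python
import numpy as np

def undersample(x, keep_rows):
    """Keep only the k-space rows in `keep_rows` of y = F x (here F = 2D FFT)."""
    y = np.fft.fftshift(np.fft.fft2(x))   # full k-space image y = F x
    return y[keep_rows, :]                # y_u = M_u y: M rows retained

N, u = 336, 8                             # fully sampled dimension and undersampling factor
rng = np.random.default_rng(0)
x = rng.standard_normal((N, N))           # stand-in for a pixel-space MR slice

# center undersampling: keep the N/u lowest-frequency rows around the k-space center
keep = np.arange(N // 2 - N // (2 * u), N // 2 + N // (2 * u))
y_u = undersample(x, keep)
print(y_u.shape)                          # (42, 336): one eighth of the rows kept
```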
Step 2: obtain a reconstructed MR image $\hat{x} = f_{Lap}(y_u;\Theta)$ from the deep Laplace network $f_{Lap}(\cdot;\Theta)$.

The deep Laplace network $f_{Lap}(\cdot;\Theta)$ is defined as follows:

$$f_{Lap}(y_u;\Theta) = F^H f_D\big(f_{D-1}(\cdots f_2(f_1(y_u;\Theta_1);\Theta_2)\cdots;\Theta_{D-1});\Theta_D\big),$$

where $\Theta = \{\Theta_0, \Theta_1, \ldots, \Theta_D\}$ is the parameter set of the network, $H$ denotes the Hermitian transpose, $D$ is the number of sub-networks of $f_{Lap}(\cdot;\Theta)$, and $f_d(\cdot;\Theta_d)$ is a sub-network of the deep Laplace network, defined as follows:

$$f_d(\bar{y};\Theta_d) = f_{DC}\big(F\,R_d(U_d(\bar{y};\theta_d^U);\theta_d)\big), \qquad \Theta_d = \{\theta_d^U, \theta_d\}.$$

Here $d \in \{1, 2, \ldots, D\}$; $U_d(\cdot;\theta_d^U)$ is the upsampling operator, which maps the $u_{d-1}$-fold undersampled MR image in k-space to the $u_d$-fold undersampled MR image in pixel space, with $u_d < u_{d-1}$ and parameters $\theta_d^U$; $R_d(\cdot;\theta_d)$ is the reconstruction module, a network module that performs the reconstruction computation on the MR image in the pixel domain, with parameters $\theta_d$; and $f_{DC}$ is the data consistency operator: letting $\hat{y} = f_{DC}(\bar{y})$ for a k-space estimate $\bar{y}$, each element $\hat{y}_i$ is computed as

$$\hat{y}_i = \begin{cases} y_i, & i \in \Omega, \\ \bar{y}_i, & i \notin \Omega, \end{cases}$$

where $\Omega$ denotes the index subset of the elements of $y$ used to construct $y_u$.
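A compact sketch of $f_{DC}$, assuming vectorized k-space data and that the measured samples $y_u$ sit at the indices $\Omega$; the function and variable names are illustrative:

```python
import numpy as np

def f_dc(y_bar, y_u, omega):
    """Data consistency: overwrite the network's k-space estimate y_bar with the
    measured samples y_u at the acquired indices omega; keep y_bar elsewhere."""
    y_hat = y_bar.copy()
    y_hat[omega] = y_u
    return y_hat
```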
Further, in step 2, the parameter set $\Theta$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$ is trained by the following steps.

Step 2.1: construct the MR image set for training the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{T} = \big\{\big(y_u^{(\gamma)}, x_{u_1}^{(\gamma)}, \ldots, x_{u_D}^{(\gamma)}, x^{(\gamma)}\big)\big\}_{\gamma \in \Gamma}.$$

Here $\Gamma = \{1, 2, \ldots, L\}$ is the index set of training samples, $L$ is the number of training samples, $u = u_0 > u_1 > \ldots > u_D$ are the different undersampling factors of the MR image, $x^{(\gamma)}$ is the $\gamma$-th fully sampled MR image, and

$$x_{u_d}^{(\gamma)} = F^H M_{u_d}^T M_{u_d} F x^{(\gamma)},$$

where $M_{u_d}$ is the mask matrix that undersamples the MR image $u_d$-fold in k-space.
Step 2.2: construct the validation set of the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{V} = \big\{\big(y_u^{(\lambda)}, x_{u_1}^{(\lambda)}, \ldots, x_{u_D}^{(\lambda)}, x^{(\lambda)}\big)\big\}_{\lambda \in \Lambda},$$

where $\Lambda = \{1, 2, \ldots, K\}$ is the index set of validation samples and $K$ is the number of validation samples.
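As a worked illustration of steps 2.1 and 2.2, the sketch below builds one training tuple from a fully sampled slice; the factor sequence (16, 6, 4, 2) mirrors the examples in Figs. 6 and 7, and the row-wise center masks are our simplification:

```python
import numpy as np

def center_mask(N, u):
    """Boolean k-space row mask keeping roughly the N/u central lines."""
    m = np.zeros(N, dtype=bool)
    m[N // 2 - N // (2 * u): N // 2 + N // (2 * u)] = True
    return m

def training_tuple(x, factors=(16, 6, 4, 2)):     # u_0 > u_1 > ... > u_D
    N = x.shape[0]
    y = np.fft.fftshift(np.fft.fft2(x))           # y = F x
    y_u = y[center_mask(N, factors[0]), :]        # network input: u_0-fold undersampled k-space
    targets = []
    for u_d in factors[1:]:                       # pixel-space targets x_{u_d}
        y_d = np.where(center_mask(N, u_d)[:, None], y, 0)           # M^T M y (zero-filled)
        targets.append(np.abs(np.fft.ifft2(np.fft.ifftshift(y_d))))  # F^H M^T M F x
    return y_u, targets, x
```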
Step 2.3: set the epoch counter $e = 0$ and the iteration counter $t = 0$, and randomly initialize the parameter set $\Theta$ of $f_{Lap}(\cdot;\Theta)$ to $\Theta^{(0)}$.
Step 2.4: set $e = e + 1$ and the training batch counter $b = 0$, and randomly partition the index set $\Gamma$ of training samples into $B$ disjoint subsets $\Gamma_1, \Gamma_2, \ldots, \Gamma_B$.
Step 2.5: set $t = t + 1$ and $b = b + 1$; for each $\gamma \in \Gamma_b$ and each $d = 1, 2, \ldots, D$, compute the intermediate output of the first $d$ sub-networks

$$\hat{x}_{u_d}^{(\gamma)} = F^H f_d\big(f_{d-1}(\cdots f_1(y_u^{(\gamma)};\Theta_1)\cdots;\Theta_{d-1});\Theta_d\big),$$

where the parameters take their current values $\Theta^{(t-1)}$.
Step 2.6: minimize the following loss function by a gradient descent method:

$$\mathcal{L}(\Theta) = \sum_{\gamma \in \Gamma_b} \sum_{d=1}^{D} l\big(\hat{x}_{u_d}^{(\gamma)}, x_{u_d}^{(\gamma)}\big),$$

where $l(\cdot,\cdot)$ is a distance metric between two images.
Step 2.7: evaluate the quality of the deep Laplace network model $f_{Lap}(\cdot;\Theta^{(t)})$ on the validation set:

$$q^{(t)} = \frac{1}{K} \sum_{\lambda \in \Lambda} Q\big(f_{Lap}(y_u^{(\lambda)};\Theta^{(t)}),\, x^{(\lambda)}\big),$$

where $Q(\cdot,\cdot)$ is a quality assessment function.
Step 2.8: repeat steps 2.5 to 2.7 until $b = B$.

Step 2.9: repeat steps 2.4 to 2.8 until $e = E$, where $E$ is a preset maximum number of training epochs.
Step 2.10: select the optimal parameter set $\Theta^{\ast} = \Theta^{(t^{\ast})}$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$, where $t^{\ast} = \arg\max_{t} q^{(t)}$.
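Steps 2.3 to 2.10 can be condensed into one training loop. The PyTorch sketch below is an assumption-laden illustration: `model` stands for the deep Laplace network and is assumed to return all $D$ intermediate pixel-space outputs, Adam stands in for the unspecified gradient descent method, magnitude images are assumed real-valued, and the $\ell_2$ metric and PSNR are just one choice each for $l(\cdot,\cdot)$ and $Q(\cdot,\cdot)$:

```python
import torch
import torch.nn.functional as F

def l2(a, b):                                         # one choice for the metric l(·,·)
    return F.mse_loss(a, b, reduction="sum")

def psnr(a, b):                                       # one choice for the quality function Q(·,·)
    return 10 * torch.log10(b.max() ** 2 / F.mse_loss(a, b))

def train(model, train_set, val_set, E=100, batch_size=6, lr=1e-3):
    """train_set/val_set: lists of (y_u, [x_u1, ..., x_uD], x) tuples (steps 2.1/2.2);
    model(y_u) is assumed to return the D intermediate pixel-space outputs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_q, best_state = -float("inf"), None
    for e in range(E):                                # steps 2.4 and 2.9
        perm = torch.randperm(len(train_set)).tolist()   # B disjoint random subsets
        for b in range(0, len(perm), batch_size):     # steps 2.5 and 2.8
            loss = 0.0
            for i in perm[b:b + batch_size]:
                y_u, x_targets, _ = train_set[i]
                outs = model(y_u)                     # \hat{x}_{u_d} for d = 1..D
                loss = loss + sum(l2(o, t) for o, t in zip(outs, x_targets))
            opt.zero_grad(); loss.backward(); opt.step()  # step 2.6
            with torch.no_grad():                     # step 2.7: q^(t) on the validation set
                q = sum(psnr(model(y_u)[-1], x) for y_u, _, x in val_set) / len(val_set)
            if q > best_q:                            # step 2.10: keep the best Θ^(t*)
                best_q = q
                best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model
```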
The technical conception of the invention is as follows: existing deep-learning-based fast magnetic resonance imaging mainly learns the mapping from undersampled data to fully sampled data directly. This requires extensive zero padding of the input undersampled data so that it conforms to the network training requirements. However, heavy zero padding causes serious artifacts in the generated magnetic resonance image from the outset, which greatly interferes with the network's subsequent end-to-end learning and severely degrades its reconstruction quality. To reduce the difficulty of network learning, the invention proposes, based on the Laplacian pyramid idea, a deep cascade network formed by several sub-networks that zero-pads the highly undersampled input image gradually, so that each sub-network concentrates on learning the mapping of a different part, improving the final reconstruction.
The beneficial effects of the invention are mainly as follows: (1) the highly undersampled magnetic resonance image is reconstructed progressively from low frequency to high frequency by the deep Laplace network, which effectively improves the reconstruction quality; (2) since each sub-network reconstructs only a portion (not all) of the undersampled data, the proposed method reconstructs faster.
Drawings
Fig. 1 is a block diagram of the deep Laplace network designed by the invention, where $y_u$ is the input $u$-fold undersampled MR image in k-space, $\hat{x}$ is the output reconstructed MR image in pixel space, $\hat{x}_{u_d}$ is the intermediate output of the $d$-th sub-network, $U_d$ is the upsampling operator of the $d$-th sub-network, $R_d$ is the reconstruction module of the $d$-th sub-network, $\mathrm{Conv}_{d,j}$ denotes the $j$-th convolution layer of $R_d$, $f_{DC}$ is the data consistency operator, PZF denotes partial zero filling, and $F^H$ denotes the inverse Fourier transform.
Fig. 2 shows examples of undersampled and fully sampled MR images, where (a) and (b) are examples of an 8-fold undersampled MR image and a fully sampled MR image in k-space, respectively, and (c) and (d) are examples of a reconstructed MR image and a fully sampled MR image in pixel space, respectively.

Fig. 3 shows examples of mask matrices, where (a) is a center-undersampling mask matrix and (b) is a Gaussian-undersampling mask matrix.
Fig. 4 shows two examples of the upsampling operator used in Fig. 1, where (a) is an upsampling operator based on bicubic interpolation (the bicubic interpolation operator is marked in the figure), and (b) is an upsampling operator based on partial zero filling, where PZF denotes the partial zero filling and $F^H$ denotes the inverse Fourier transform.
FIG. 5 illustrates the effect of the data consistency operator $f_{DC}$, where the first, second, and third rows show the MR image before $f_{DC}$, the MR image after $f_{DC}$, and the fully sampled MR image, respectively, and the first, second, and third columns show the MR images output by the three sub-networks of the deep Laplace network, respectively.
Fig. 6 shows 5 MR training samples for training the deep Laplace network, one per row, each consisting of 5 MR images: the $u$-fold undersampled MR image in k-space (column 1); the $u_1 = 6$-fold, $u_2 = 4$-fold, and $u_3 = 2$-fold undersampled MR images in pixel space (columns 2 to 4); and the fully sampled MR image (last column).
Fig. 7 is a quantitative comparison of the PSNR/SSIM values of the reconstruction results of the proposed deep Laplace network (DeepLap), the conventional zero-filling method (ZF), and the four prior methods DeepCascade, RefineGAN, UNet, and DenseUNet. With the MR images of 8 subjects as training samples and the MR images of the remaining 2 subjects as test samples, columns 2 through 6 of Fig. 7 report the PSNR/SSIM values reconstructed from 16-fold undersampled MR images of the subjects numbered 2 and 3, 0 and 1, 4 and 5, 6 and 7, and 8 and 9, respectively.
Fig. 8 compares reconstruction results; from left to right are the fully sampled MR image and the reconstructed images of the ZF, DeepLap, DeepCascade, RefineGAN, UNet, and DenseUNet methods, respectively.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1-8, a fast magnetic resonance imaging method based on a deep Laplace network (Deep Laplacian Network, DeepLap) comprises the following steps:
Step 1: input a $u$-fold undersampled MR image $y_u$.

Step 2: obtain a reconstructed MR image $\hat{x} = f_{Lap}(y_u;\Theta)$ from the deep Laplace network $f_{Lap}(\cdot;\Theta)$. FIG. 1 shows an example of the deep Laplace network $f_{Lap}(\cdot;\Theta)$.
Further, in step 1, $y_u \in \kappa^M$ is the MR image undersampled by a factor of $u$ in k-space, $\kappa$ denotes the k-space in which the MR image lies, $M$ is the dimension of $y_u$, and $N$ is the dimension of the fully sampled MR image. The undersampled and fully sampled MR images are related by

$$y_u = M_u y = M_u F x,$$

where $y \in \kappa^N$ is the fully sampled image of $y_u$ in k-space, $x \in \mathcal{P}^N$ is the fully sampled image of $y_u$ in pixel space, $\mathcal{P}$ denotes the pixel space in which the MR image lies, $F \in \mathbb{C}^{N \times N}$ is the Fourier transform matrix that transforms the MR image from pixel space to k-space, and $M_u \in \{0,1\}^{M \times N}$ is the mask matrix that undersamples $y$ by a factor of $u$. Fig. 2(a) gives an example of a $u = 8$-fold undersampled $y_u$, and Figs. 2(b) and 2(d) give examples of the fully sampled ($N = 336$) MR images $y$ and $x$ in k-space and pixel space, respectively. The mask matrix $M_u$ may be a center-undersampling matrix or a Gaussian-undersampling matrix, as shown in Fig. 3; Fig. 2(a) is the k-space MR image obtained by center-undersampling Fig. 2(b).
Further, in step 2, the deep Laplace network $f_{Lap}(\cdot;\Theta)$ is defined as follows:

$$f_{Lap}(y_u;\Theta) = F^H f_D\big(f_{D-1}(\cdots f_2(f_1(y_u;\Theta_1);\Theta_2)\cdots;\Theta_{D-1});\Theta_D\big),$$

where $\Theta = \{\Theta_0, \Theta_1, \ldots, \Theta_D\}$ is the parameter set of the network, $H$ denotes the Hermitian transpose, $D$ is the number of sub-networks of $f_{Lap}(\cdot;\Theta)$, and $f_d(\cdot;\Theta_d)$ is a sub-network of the deep Laplace network, defined as follows:

$$f_d(\bar{y};\Theta_d) = f_{DC}\big(F\,R_d(U_d(\bar{y};\theta_d^U);\theta_d)\big), \qquad \Theta_d = \{\theta_d^U, \theta_d\}.$$

Here $d \in \{1, 2, \ldots, D\}$; $U_d(\cdot;\theta_d^U)$ is the upsampling operator, which maps the $u_{d-1}$-fold undersampled MR image in k-space to the $u_d$-fold undersampled MR image in pixel space, with $u_d < u_{d-1}$ and parameters $\theta_d^U$; $R_d(\cdot;\theta_d)$ is the reconstruction module, a network module that performs the reconstruction computation on the MR image in the pixel domain, with parameters $\theta_d$; and $f_{DC}$ is the data consistency operator: letting $\hat{y} = f_{DC}(\bar{y})$ for a k-space estimate $\bar{y}$, each element $\hat{y}_i$ is computed as

$$\hat{y}_i = \begin{cases} y_i, & i \in \Omega, \\ \bar{y}_i, & i \notin \Omega, \end{cases}$$

where $\Omega$ denotes the index subset of the elements of $y$ used to construct $y_u$. Fig. 1 shows an example of the deep Laplace network $f_{Lap}(\cdot;\Theta)$: the depth $D$ may be 3 or 5; the upsampling operator $U_d$ of each sub-network $f_d(\cdot;\Theta_d)$ is the core operator and may use bicubic interpolation or partial zero filling (examples are given in Figs. 4(a) and 4(b), respectively; the $U_d$ in Fig. 1 uses partial zero filling); the reconstruction module $R_d$ may be a residual convolution module or a self-encoding module, and Fig. 1 uses a residual convolution module (taking $R_1$ as an example), whose main body consists of $j$ cascaded convolution layers ($j$ may be 5 or 7), each with convolution kernels of size $a \times a$ ($a$ may be 3 or 5), with the output of the last convolution layer added to the input of the module; the data consistency operator $f_{DC}$ further processes the output of $R_d$, and the first and second rows of Fig. 5 compare the MR images before and after $f_{DC}$ in the three sub-networks of $f_{Lap}(\cdot;\Theta)$.
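A hedged PyTorch sketch of the residual convolution module just described, with $j$ cascaded $a \times a$ convolution layers and a residual connection; the channel width of 64 and the two-channel real/imaginary representation are our assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

class ResidualConvModule(nn.Module):
    """Sketch of R_d: j cascaded a*a convolution layers whose last output
    is added back to the module input (residual connection)."""
    def __init__(self, j=5, a=3, width=64, channels=2):   # width is an assumed hyper-parameter
        super().__init__()
        layers, c = [], channels                           # 2 channels: real/imaginary parts
        for _ in range(j - 1):
            layers += [nn.Conv2d(c, width, a, padding=a // 2), nn.ReLU(inplace=True)]
            c = width
        layers.append(nn.Conv2d(c, channels, a, padding=a // 2))  # back to the input channels
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)                            # residual connection

# usage sketch
r1 = ResidualConvModule(j=5, a=3)
out = r1(torch.randn(1, 2, 336, 336))                      # same shape as the input
```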
Further, Fig. 2(c) shows an example of the MR image $\hat{x}$ reconstructed from the 8-fold undersampled $y_u$ of Fig. 2(a) by the deep Laplace network $f_{Lap}(\cdot;\Theta)$ described above.
Further, in step 2, the parameter set $\Theta$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$ is trained by the following steps.

Step 2.1: construct the MR image set for training the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{T} = \big\{\big(y_u^{(\gamma)}, x_{u_1}^{(\gamma)}, \ldots, x_{u_D}^{(\gamma)}, x^{(\gamma)}\big)\big\}_{\gamma \in \Gamma}.$$

Here $\Gamma = \{1, 2, \ldots, L\}$ is the index set of training samples, $L$ is the number of training samples, $u = u_0 > u_1 > u_2 > \ldots > u_D$ are the different undersampling factors of the MR image, $x^{(\gamma)}$ is the $\gamma$-th fully sampled MR image, and

$$x_{u_d}^{(\gamma)} = F^H M_{u_d}^T M_{u_d} F x^{(\gamma)},$$

where $M_{u_d}$ is the mask matrix that undersamples the MR image $u_d$-fold in k-space. Fig. 6 shows 5 of the training samples constructed for this set.
Step 2.2: construct the validation set of the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{V} = \big\{\big(y_u^{(\lambda)}, x_{u_1}^{(\lambda)}, \ldots, x_{u_D}^{(\lambda)}, x^{(\lambda)}\big)\big\}_{\lambda \in \Lambda},$$

where $\Lambda = \{1, 2, \ldots, K\}$ is the index set of validation samples and $K$ is the number of validation samples.
Step 2.3: set the epoch counter $e = 0$ and the iteration counter $t = 0$, and randomly initialize the parameter set $\Theta$ of $f_{Lap}(\cdot;\Theta)$ to $\Theta^{(0)}$.
Step 2.4: set $e = e + 1$ and the training batch counter $b = 0$, and randomly partition the index set $\Gamma$ of training samples into $B$ disjoint subsets $\Gamma_1, \ldots, \Gamma_B$, where the batch size $|\Gamma_b|$ may be 6 or 8.
Step 2.5: set $t = t + 1$ and $b = b + 1$; for each $\gamma \in \Gamma_b$ and each $d = 1, 2, \ldots, D$, compute the intermediate output of the first $d$ sub-networks

$$\hat{x}_{u_d}^{(\gamma)} = F^H f_d\big(f_{d-1}(\cdots f_1(y_u^{(\gamma)};\Theta_1)\cdots;\Theta_{d-1});\Theta_d\big),$$

where the parameters take their current values $\Theta^{(t-1)}$.
Step 2.6: minimize the following loss function by a gradient descent method:

$$\mathcal{L}(\Theta) = \sum_{\gamma \in \Gamma_b} \sum_{d=1}^{D} l\big(\hat{x}_{u_d}^{(\gamma)}, x_{u_d}^{(\gamma)}\big),$$

where $l(\cdot,\cdot)$ is a distance metric between two images; the $\ell_1$ norm, the $\ell_2$ norm, or the Charbonnier penalty function may be used. The Charbonnier penalty function takes its standard form

$$l(a, b) = \sqrt{\|a - b\|^2 + \epsilon^2},$$

with $\epsilon$ a small positive constant.
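A one-function sketch of the Charbonnier penalty above; the value $\epsilon = 10^{-3}$ is a conventional choice, not taken from the patent:

```python
import torch

def charbonnier(a, b, eps=1e-3):
    """l(a, b) = sqrt(||a - b||^2 + eps^2): a smooth, robust variant of the l1 norm."""
    return torch.sqrt(((a - b) ** 2).sum() + eps ** 2)
```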
Step 2.7: evaluate the quality of the deep Laplace network model $f_{Lap}(\cdot;\Theta^{(t)})$ on the validation set:

$$q^{(t)} = \frac{1}{K} \sum_{\lambda \in \Lambda} Q\big(f_{Lap}(y_u^{(\lambda)};\Theta^{(t)}),\, x^{(\lambda)}\big),$$

where $Q(\cdot,\cdot)$ is a quality assessment function, which may be the peak signal-to-noise ratio (PSNR) or the structural similarity (SSIM).
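$Q(\cdot,\cdot)$ can be instantiated with off-the-shelf metrics; below is a sketch using scikit-image (a library choice of ours, not named by the patent), on stand-in magnitude images:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(336, 336)                    # stand-in fully sampled magnitude image x
rec = ref + 0.01 * np.random.randn(336, 336)      # stand-in reconstruction f_Lap(y_u; Θ)
q_psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
q_ssim = structural_similarity(ref, rec, data_range=1.0)
print(q_psnr, q_ssim)
```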
Step 2.8: repeat steps 2.5 to 2.7 until $b = B$.

Step 2.9: repeat steps 2.4 to 2.8 until $e = E$, where $E$ is a preset maximum number of training epochs; $E = 100$ may be used.

Step 2.10: select the optimal parameter set $\Theta^{\ast} = \Theta^{(t^{\ast})}$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$, where $t^{\ast} = \arg\max_{t} q^{(t)}$.
The beneficial effects of the proposed deep Laplace network were verified experimentally. 10 subjects were scanned with a Philips Ingenia 3T scanner to acquire MR images of the T2WI modality, and each MR slice was cropped to a size of 336 × 261. The U-Net network was trained using the Adam optimization algorithm, with momentum set to 0.9, batch size set to 6, and the initial learning rate set to 0.001 and decayed 10-fold every 50 epochs; 100 epochs of training were performed.
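The optimizer schedule just described maps directly onto standard PyTorch components; this sketch uses a stand-in module in place of the actual network:

```python
import torch

model = torch.nn.Conv2d(2, 2, 3, padding=1)      # stand-in for the trained network
opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))  # momentum 0.9
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)    # 10x drop every 50 epochs
for epoch in range(100):                          # 100 training epochs
    ...                                           # one pass over the training batches (steps 2.4-2.8)
    sched.step()
```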
To quantitatively evaluate the reconstruction performance of the network, the reconstructed MR images are quality-assessed with the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM). The proposed deep Laplace network (DeepLap) is compared with the zero-filling method (ZF) and the four prior methods DeepCascade, RefineGAN, UNet, and DenseUNet. The MR images of 8 subjects were used as training samples, leaving the MR images of 2 subjects as test samples. The stability of the compared methods was tested by cross-validation, reconstructing 16-fold undersampled MR images of the subjects numbered 2 and 3, 0 and 1, 4 and 5, 6 and 7, and 8 and 9, respectively. Comparing the reconstructed results with the fully sampled images yields the PSNR/SSIM values shown in Fig. 7; the proposed method outperforms the other methods.
Fig. 8 further shows the reconstruction results of the compared methods; from left to right are the fully sampled MR image and the reconstructed images of the ZF, DeepLap, DeepCascade, RefineGAN, UNet, and DenseUNet methods, respectively. The proposed DeepLap method reconstructs the detailed information of the highly undersampled MR image better.
The embodiments described in this specification are merely illustrative of how the inventive concept may be implemented. The scope of the present invention should not be construed as limited to the specific forms set forth in the embodiments; it also covers the equivalents thereof that would occur to one skilled in the art based on the inventive concept.
Claims (1)
1. A rapid magnetic resonance imaging method based on a deep Laplace network, comprising the steps of:
Step 1: input a $u$-fold undersampled MR image $y_u \in \kappa^M$, where $y_u$ is the MR image undersampled by a factor of $u$ in k-space, $\kappa$ denotes the k-space in which the MR image lies, $M$ is the dimension of $y_u$, and $N$ is the dimension of the fully sampled MR image; the undersampled MR image $y_u$ is obtained by

$$y_u = M_u y = M_u F x,$$

where $y \in \kappa^N$ is the fully sampled image of $y_u$ in k-space, $x \in \mathcal{P}^N$ is the fully sampled image of $y_u$ in pixel space, $\mathcal{P}$ denotes the pixel space in which the MR image lies, $F \in \mathbb{C}^{N \times N}$ is the Fourier transform matrix that transforms the MR image from pixel space to k-space, and $M_u \in \{0,1\}^{M \times N}$ is the mask matrix that undersamples $y$ by a factor of $u$;
Step 2: obtain a reconstructed MR image $\hat{x} = f_{Lap}(y_u;\Theta)$ from the deep Laplace network $f_{Lap}(\cdot;\Theta)$;

the deep Laplace network $f_{Lap}(\cdot;\Theta)$ is defined as follows:

$$f_{Lap}(y_u;\Theta) = F^H f_D\big(f_{D-1}(\cdots f_2(f_1(y_u;\Theta_1);\Theta_2)\cdots;\Theta_{D-1});\Theta_D\big),$$

where $\Theta = \{\Theta_0, \Theta_1, \ldots, \Theta_D\}$ is the parameter set of the network, $H$ denotes the Hermitian transpose, $D$ is the number of sub-networks of $f_{Lap}(\cdot;\Theta)$, and $f_d(\cdot;\Theta_d)$ is a sub-network of the deep Laplace network, defined as follows:

$$f_d(\bar{y};\Theta_d) = f_{DC}\big(F\,R_d(U_d(\bar{y};\theta_d^U);\theta_d)\big), \qquad \Theta_d = \{\theta_d^U, \theta_d\};$$

here $d \in \{1, 2, \ldots, D\}$; $U_d(\cdot;\theta_d^U)$ is the upsampling operator, which maps the $u_{d-1}$-fold undersampled MR image in k-space to the $u_d$-fold undersampled MR image in pixel space, with $u_d < u_{d-1}$ and parameters $\theta_d^U$; $R_d(\cdot;\theta_d)$ is the reconstruction module, a network module that performs the reconstruction computation on the MR image in the pixel domain, with parameters $\theta_d$; and $f_{DC}$ is the data consistency operator: letting $\hat{y} = f_{DC}(\bar{y})$ for a k-space estimate $\bar{y}$, each element $\hat{y}_i$ is computed as

$$\hat{y}_i = \begin{cases} y_i, & i \in \Omega, \\ \bar{y}_i, & i \notin \Omega, \end{cases}$$

where $\Omega$ denotes the index subset of the elements of $y$ used to construct $y_u$;
in step 2, the parameter set $\Theta$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$ is trained by the following steps:

Step 2.1: construct the MR image set for training the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{T} = \big\{\big(y_u^{(\gamma)}, x_{u_1}^{(\gamma)}, \ldots, x_{u_D}^{(\gamma)}, x^{(\gamma)}\big)\big\}_{\gamma \in \Gamma};$$

here $\Gamma = \{1, 2, \ldots, L\}$ is the index set of training samples, $L$ is the number of training samples, $u = u_0 > u_1 > u_2 > \ldots > u_D$ are the different undersampling factors of the MR image, $x^{(\gamma)}$ is the $\gamma$-th fully sampled MR image, and

$$x_{u_d}^{(\gamma)} = F^H M_{u_d}^T M_{u_d} F x^{(\gamma)},$$

where $M_{u_d}$ is the mask matrix that undersamples the MR image $u_d$-fold in k-space;

Step 2.2: construct the validation set of the deep Laplace network $f_{Lap}(\cdot;\Theta)$:

$$\mathcal{V} = \big\{\big(y_u^{(\lambda)}, x_{u_1}^{(\lambda)}, \ldots, x_{u_D}^{(\lambda)}, x^{(\lambda)}\big)\big\}_{\lambda \in \Lambda},$$

where $\Lambda = \{1, 2, \ldots, K\}$ is the index set of validation samples and $K$ is the number of validation samples;
Step 2.3: set the epoch counter $e = 0$ and the iteration counter $t = 0$, and randomly initialize the parameter set $\Theta$ of $f_{Lap}(\cdot;\Theta)$ to $\Theta^{(0)}$;

Step 2.4: set $e = e + 1$ and the training batch counter $b = 0$, and randomly partition the index set $\Gamma$ of training samples into $B$ disjoint subsets $\Gamma_1, \Gamma_2, \ldots, \Gamma_B$;

Step 2.5: set $t = t + 1$ and $b = b + 1$; for each $\gamma \in \Gamma_b$ and each $d = 1, 2, \ldots, D$, compute the intermediate output

$$\hat{x}_{u_d}^{(\gamma)} = F^H f_d\big(f_{d-1}(\cdots f_1(y_u^{(\gamma)};\Theta_1)\cdots;\Theta_{d-1});\Theta_d\big),$$

where the parameters take their current values $\Theta^{(t-1)}$;
Step 2.6: minimize the following loss function by a gradient descent method:

$$\mathcal{L}(\Theta) = \sum_{\gamma \in \Gamma_b} \sum_{d=1}^{D} l\big(\hat{x}_{u_d}^{(\gamma)}, x_{u_d}^{(\gamma)}\big),$$

where $l(\cdot,\cdot)$ is a distance metric between two images;

Step 2.7: evaluate the quality of the deep Laplace network model $f_{Lap}(\cdot;\Theta^{(t)})$ on the validation set:

$$q^{(t)} = \frac{1}{K} \sum_{\lambda \in \Lambda} Q\big(f_{Lap}(y_u^{(\lambda)};\Theta^{(t)}),\, x^{(\lambda)}\big),$$

where $Q(\cdot,\cdot)$ is a quality assessment function;
Step 2.8: repeat steps 2.5 to 2.7 until $b = B$;

Step 2.9: repeat steps 2.4 to 2.8 until $e = E$, where $E$ is a preset maximum number of training epochs;

Step 2.10: select the optimal parameter set $\Theta^{\ast} = \Theta^{(t^{\ast})}$ of the deep Laplace network $f_{Lap}(\cdot;\Theta)$, where $t^{\ast} = \arg\max_{t} q^{(t)}$.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011102983.1A | 2020-10-15 | 2020-10-15 | Rapid magnetic resonance imaging method based on deep Laplace network |
Publications (2)

Publication Number | Publication Date |
---|---|
CN112634385A | 2021-04-09 |
CN112634385B | 2024-05-10 |
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011102983.1A | Rapid magnetic resonance imaging method based on deep Laplace network | 2020-10-15 | 2020-10-15 |

Country Status (1)

Country | Link |
---|---|
CN | CN112634385B (en) |
Families Citing this family (1)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012472A * | 2022-12-02 | 2023-04-25 | 中国科学院深圳先进技术研究院 | Quick magnetic resonance imaging method and device |
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102389309A * | 2011-07-08 | 2012-03-28 | 首都医科大学 | Compressed sensing theory-based reconstruction method of magnetic resonance image |
CN110378980A * | 2019-07-16 | 2019-10-25 | 厦门大学 | Multi-channel magnetic resonance image reconstruction method based on deep learning |
CN110916664A * | 2019-12-10 | 2020-03-27 | 电子科技大学 | Rapid magnetic resonance image reconstruction method based on deep learning |
CN111487573A * | 2020-05-18 | 2020-08-04 | 厦门大学 | Enhanced residual cascade network model for magnetic resonance undersampled imaging |
Non-Patent Citations (4)

Title |
---|
Yongliang Tang et al., "Deep Inception-Residual Laplacian Pyramid Networks for Accurate Single-Image Super-Resolution", IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 4, 2019-06-28, pp. 2, 4-5. * |
Jia Tingting et al., "A super-resolution image reconstruction algorithm with a clique network of Laplacian pyramid structure" (一种拉普拉斯金字塔结构的团网络超分辨率图像重建算法), Journal of Chinese Computer Systems (小型微型计算机系统), vol. 40, no. 8, 2019-08-31, pp. 1760-1766. * |
Cheng Huitao et al., "Parallel magnetic resonance imaging based on deep recursive cascade convolutional neural networks" (基于深度递归级联卷积神经网络的并行磁共振成像方法), Chinese Journal of Magnetic Resonance (波谱学杂志), vol. 36, no. 4, 2019-05-30, pp. 439-440. * |
Han Ze et al., "Image fusion based on deep stacked convolutional neural networks" (基于深度堆叠卷积神经网络的图像融合), Chinese Journal of Computers (计算机学报), vol. 40, no. 11, 2017-11-30, pp. 2506-2518. * |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |