CN113487507A - Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation - Google Patents

Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation

Info

Publication number
CN113487507A
CN113487507A
Authority
CN
China
Prior art keywords
data
network
domain
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110766581.XA
Other languages
Chinese (zh)
Inventor
郑峰 (Zheng Feng)
刘晓芳 (Liu Xiaofang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN202110766581.XA priority Critical patent/CN113487507A/en
Publication of CN113487507A publication Critical patent/CN113487507A/en
Pending legal-status Critical Current


Classifications

    • G06T 5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks

Abstract

The invention claims a dual-domain recursive network MR reconstruction method based on feature aggregation. The method comprises the following steps: 1) preprocessing the K-space data, including constructing an undersampling template, simulating the undersampling process, normalizing, extracting slices, applying the two-dimensional wavelet transform, and building the training and test sets; 2) constructing the network model, which comprises a wavelet-domain multi-module feature fusion network and an image-domain multi-module feature fusion network; 3) training the neural network, taking the processed wavelet-domain data, the undersampling template (mask), and the undersampled K-space data as network inputs and the original magnetic resonance image data as ground truth (GT); 4) testing the network model, i.e., feeding the undersampled wavelet-domain data of the test set into the trained model to obtain a high-quality reconstructed image; 5) evaluating the quality of the reconstructed image using PSNR and SSIM.

Description

Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a multi-module feature aggregation dual-domain recursive network MR reconstruction method.
Background
Magnetic Resonance Imaging (MRI) is a non-invasive in vivo imaging technique: nuclei in tissue resonate under a strong magnetic field, and the resulting signals are acquired and processed by computer to form a magnetic resonance medical image. Compared with CT, MRI is radiation-free, offers high contrast and multi-parameter imaging, and makes lesions easier to find. However, magnetic resonance scanning is slow, which causes motion artifacts, so the main problems to be solved are the speed and quality of magnetic resonance image reconstruction.
Conventional magnetic resonance reconstruction methods are based on Compressed Sensing (CS) theory, which allows signal acquisition below the Nyquist sampling rate. CS-MRI exploits the sparsity of the data and the incoherence of the sampling: sparse signals are compressively sampled through an observation matrix, and the MR image is recovered from the undersampled signal by nonlinear optimization, but the reconstruction speed and accuracy of these traditional methods are limited. In recent years, deep learning has been applied to magnetic resonance image reconstruction; compared with traditional methods, deep learning builds more abstract feature representations from shallow features of the data and can effectively improve MR reconstruction accuracy. The present method improves on existing deep learning reconstruction: a wavelet domain is introduced and combined with the image domain, multi-module feature aggregation is performed over the modules of each domain, and multiple MR slices are reconstructed jointly. In the preprocessing stage, a certain proportion of low-frequency information is kept to preserve the basic image structure while much of the high-frequency information containing tissue detail is lost; after training, the network can reconstruct a high-quality, high-accuracy magnetic resonance image. The aim here is to provide an alternative approach to magnetic resonance reconstruction.
Disclosure of Invention
The invention provides a dual-domain recursive network MR reconstruction method based on multi-module feature aggregation, offering an alternative approach to MR image reconstruction.
In order to achieve the above object, the present invention comprises the steps of:
S1) Data construction and preprocessing: the undersampling process is simulated by applying undersampling templates (masks) with different acceleration factors to the original K-space data set, yielding undersampled K-space data; these are converted into an image-domain data set by the Inverse Fourier Transform (IFT); normalization and sequence cropping are then applied, and the image-domain data are converted into a wavelet-domain data set by the two-dimensional Discrete Wavelet Transform (DWT). The data set is divided into a training set and a test set.
S2) Construction of the reconstruction model: a multi-module feature-fusion dual-domain recursive network is built with the deep learning framework PyTorch for MR image reconstruction. The network comprises four parts: a wavelet-domain multi-module feature aggregation network, an image-domain multi-module feature aggregation network, data consistency (DC) layers, and the loss function.
1) Wavelet-domain multi-module feature aggregation network: the architecture comprises three repeated wavelet-domain residual modules. Each basic residual module consists of a 3 × 3 convolution, three parallel dilated (atrous) convolutions with 3 × 3 kernels and dilation rates 1, 2, and 4, and two 1 × 1 convolutions. The undersampled wavelet data first pass through the 3 × 3 convolution to extract shallow features, then through the three convolutions with different dilation rates; a 1 × 1 convolution fuses the multi-scale information, and a residual connection is applied. The outputs of the three wavelet-domain residual blocks, F_0, F_1, and F_2, are concatenated along the channel dimension, and a 1 × 1 convolution fuses the feature information of the different modules, realizing multi-module wavelet-domain feature fusion. The network thereby fuses different context information and refines the feature expression; finally a cross-layer skip connection is applied.
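As an illustration, one wavelet-domain residual module of the kind described above might be sketched in PyTorch as follows; the channel widths, class name, and variable names are assumptions for the sketch, not taken from the patent:

```python
import torch
import torch.nn as nn

class WaveletResidualBlock(nn.Module):
    """Sketch of one wavelet-domain residual module: a 3x3 conv for shallow
    features, three parallel 3x3 dilated convs (rates 1, 2, 4), a 1x1 conv
    fusing the multi-scale branches, and a residual connection."""
    def __init__(self, channels: int = 4, width: int = 32):
        super().__init__()
        self.shallow = nn.Conv2d(channels, width, 3, padding=1)
        self.branches = nn.ModuleList([
            nn.Conv2d(width, width, 3, padding=d, dilation=d)
            for d in (1, 2, 4)  # padding=dilation keeps spatial size
        ])
        self.fuse = nn.Conv2d(3 * width, width, 1)    # 1x1 multi-scale fusion
        self.project = nn.Conv2d(width, channels, 1)  # back to subband channels
        self.act = nn.LeakyReLU(0.1)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        s = self.act(self.shallow(w))
        multi = torch.cat([self.act(b(s)) for b in self.branches], dim=1)
        return w + self.project(self.act(self.fuse(multi)))  # residual add

# 10 slices x 4 wavelet subbands x 128 x 128, matching the preprocessing step
out = WaveletResidualBlock()(torch.zeros(10, 4, 128, 128))
```

Because every branch preserves spatial size, the three dilation rates differ only in receptive field, which is what lets the 1 × 1 fusion mix context at several scales.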
2) Wavelet-domain data consistency layer: the wavelet-domain output of the network is converted to image-domain data by the inverse wavelet transform and then to the K-space domain by the Fourier transform. Since the undersampled K-space data contain exact measurements, the corresponding positions of the network-reconstructed K-space are replaced with the undersampled K-space values, as follows:
k̂ = F W^H f_cnn(w_u|θ)

WDC(k̂)(j) = k̂(j), if j ∉ Ω
WDC(k̂)(j) = (k̂(j) + δ·y(j)) / (1 + δ), if j ∈ Ω

w_rec = W F^H WDC(k̂)

where WDC denotes wavelet-domain data consistency; W and W^H are the two-dimensional wavelet transform and its inverse; f_cnn(w_u|θ) is the network reconstruction; w_u is the initial undersampled wavelet data; F and F^H are the two-dimensional Fourier transform and its inverse; y = M K_0, where M is the undersampling template (mask) and K_0 the initial K-space data; δ is a weight parameter; j is a point in K-space; and Ω is the set of sampled K-space points.
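The replacement rule above can be sketched in NumPy as a schematic, assuming the network's K-space estimate, the measured K-space, and the mask are given; the surrounding wavelet and Fourier transforms are omitted, and the function name is illustrative:

```python
import numpy as np

def data_consistency(k_rec, k0, mask, delta=1.0):
    """Schematic k-space data consistency: at sampled positions (mask == 1)
    blend the network's k-space with the measured k-space using weight delta;
    unsampled positions keep the network's prediction."""
    blended = (k_rec + delta * k0) / (1.0 + delta)
    return np.where(mask == 1, blended, k_rec)

rng = np.random.default_rng(0)
k_rec = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
k0 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
mask = np.zeros((8, 8)); mask[::2] = 1   # toy Cartesian mask: every other row
k_dc = data_consistency(k_rec, k0, mask, delta=1.0)
```

With δ = 1 the sampled positions become the simple average of measurement and prediction; letting δ → ∞ would replace them with the measurements outright.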
3) Image-domain multi-module feature aggregation network: the architecture comprises three Bidirectional Feature Fusion (BFF) modules for learning feature information of adjacent slices. The output of the wavelet-domain reconstruction network is converted into image-domain data by the inverse wavelet transform and used as the input of the image-domain reconstruction network. In each basic module, every slice of the input data undergoes a 3 × 3 convolution for shallow feature extraction and then a second 3 × 3 convolution, yielding feature maps M_1, M_2, ..., M_n. Let A_1 = M_1; A_1 is passed through a 3 × 3 convolution for feature extraction and fused with M_2 to obtain A_2; A_2 is convolved and fused with M_3 to obtain A_3; and so on, until A_{n-1} convolved and fused with M_n gives A_n. The module is activated with a LeakyReLU function:
A_1 = M_1
A_i = LeakyReLU(conv(A_{i-1}) + M_i), i = 2, ..., n
then, the slice dimension is reversely operated, each slice feature in the reverse direction of the slice dimension is convoluted by 3 multiplied by 3 to respectively obtain a feature map W1,W2,...,WnThen let B1=W1,B1Extracting by convolution characteristic and W2Feature fusion to obtain B2A 1 to B2Extracting by convolution characteristic and W3Feature fusion to obtain B3By analogy, Bn-1Extracting by convolution characteristic and WnFeature fusion to obtain BnThe module is corrected by using a LeakRelu function, and the formula is as follows:
B_1 = W_1
B_i = LeakyReLU(conv(B_{i-1}) + W_i), i = 2, ..., n
where conv denotes a convolution operation. Finally, the feature maps A_1, A_2, ..., A_n and B_1, B_2, ..., B_n are fused pairwise by element-wise addition, which enriches each dimension of the image features and ensures effective feature information is learned across slices; the summed data are then concatenated along the slice dimension:

H = cat[(A_1 + B_n), (A_2 + B_{n-1}), ..., (A_n + B_1)]

where cat[·] denotes the concatenation operation.
The features of the image-domain data after the three bidirectional feature fusion modules are H_0, H_1, and H_2. These are concatenated along the channel dimension and fused by a 1 × 1 convolution, realizing multi-module image-domain feature fusion; the network fuses different context information, refines the feature expression, and a long skip connection follows.
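The bidirectional recurrence and final pairwise fusion can be sketched schematically; here the 3 × 3 convolution is stood in for by a generic `extract` callable, fusion is taken as addition (as in the concatenation formula), and all names are illustrative:

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)

def bff(slices, extract):
    """Schematic bidirectional feature fusion over a stack of slice features:
    a forward recurrence A, a backward recurrence B over reversed slices,
    pairwise addition A_i + B_{n+1-i}, and stacking along the slice axis."""
    M = [extract(s) for s in slices]            # forward per-slice features
    A = [M[0]]
    for m in M[1:]:                             # A_i = act(extract(A_{i-1}) + M_i)
        A.append(leaky_relu(extract(A[-1]) + m))
    W = [extract(s) for s in slices[::-1]]      # backward pass on reversed order
    B = [W[0]]
    for w in W[1:]:                             # B_i = act(extract(B_{i-1}) + W_i)
        B.append(leaky_relu(extract(B[-1]) + w))
    fused = [a + b for a, b in zip(A, B[::-1])] # A_1+B_n, A_2+B_{n-1}, ...
    return np.stack(fused)                      # concat along slice dimension

slices = [np.full((4, 4), float(i)) for i in range(3)]
H = bff(slices, extract=lambda x: 0.5 * x)      # toy extractor in place of a conv
```

The point of the two passes is that each slice's fused feature sees context from both its preceding and following neighbors.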
4) Image-domain data consistency layer. The formula is as follows:

x̂_k = F f_cnn(x_u|θ)

IDC(x̂_k)(j) = x̂_k(j), if j ∉ Ω
IDC(x̂_k)(j) = (x̂_k(j) + δ·y(j)) / (1 + δ), if j ∈ Ω

x_rec = F^H IDC(x̂_k)

where IDC denotes image-domain data consistency; F and F^H are the two-dimensional Fourier transform and its inverse; f_cnn(x_u|θ) is the network reconstruction; x_u is the initial undersampled image data; y = M K_0, where M is the undersampling template (mask) and K_0 the initial K-space data; δ is a weight parameter; j is a point in K-space; and Ω is the set of sampled K-space points.
5) Constructing a loss function
The model uses the mean square error:

MSE = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (x(i, j) − y(i, j))²

where x denotes the target data, y the data reconstructed by the network, m and n the image dimensions, and i and j the pixel indices; the MSE is the mean squared deviation of the pixel values from the ground truth.
The wavelet-domain loss function is:

loss_1 = MSE(w, f_cnn(w_u|θ))

The image-domain loss function is:

loss_2 = MSE(x, f_cnn(x_u|θ))

where w and x denote the fully sampled wavelet-domain and image-domain data.
the overall loss function is shown below:
Loss = λ_1·loss_1 + λ_2·loss_2
where f_cnn denotes the network model, θ the learnable parameters, and λ_1, λ_2 the weight parameters; the parameters are optimized by back-propagation until the loss function converges.
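A minimal sketch of the combined dual-domain loss; the λ values, array shapes, and function names are illustrative assumptions:

```python
import numpy as np

def mse(x, y):
    """Mean of squared pixel-wise deviations from the target."""
    return np.mean((x - y) ** 2)

def dual_domain_loss(w_gt, w_rec, x_gt, x_rec, lam1=0.5, lam2=0.5):
    """Loss = lam1 * wavelet-domain MSE + lam2 * image-domain MSE."""
    return lam1 * mse(w_gt, w_rec) + lam2 * mse(x_gt, x_rec)

# wavelet-domain error of 1 everywhere, perfect image-domain reconstruction
loss = dual_domain_loss(np.ones((4, 4)), np.zeros((4, 4)),
                        np.ones((4, 4)), np.ones((4, 4)))
```

The two weights let training trade off fidelity in the wavelet subbands (high-frequency detail) against fidelity in the final image.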
S4) Set the maximum number of iterations so that the network fits better feature information and reconstructs a high-quality MR image.
S5) Model training: the training set is fed into the model, a high-quality MR image is reconstructed by forward propagation, back-propagation is then performed according to the loss function, and the network parameters are updated continuously until they stabilize.
Compared with the prior art, the invention has the following advantages:
in the magnetic resonance image reconstruction method, the combination of the wavelet domain and the image domain enables the network to learn more high-frequency information. The multi-module feature aggregation of the wavelet domain and the image domain can learn different semantic information, fit better data features, accelerate network convergence and reconstruct a high-quality MR image.
Drawings
FIG. 1 is a flowchart of a dual-domain recursive network MR reconstruction method based on multi-module feature aggregation
FIG. 2 is a diagram of an overall model of a dual-domain recursive network based on multi-module feature aggregation
FIG. 3 is a wavelet domain residual module and bidirectional feature fusion module of a dual-domain recursive network based on multi-module feature aggregation
Detailed Description
The invention is described in detail below with reference to the attached drawings.
The invention comprises four aspects: the method comprises the steps of training data preparation, wavelet domain multi-module feature aggregation network construction, image domain multi-module feature aggregation network construction, and network model training and reconstruction.
S1) Preparation of the training data, including the raw K-space data and undersampling templates. The raw K-space data is denoted K(k_x, k_y), where k_x is the frequency-encoding direction and k_y the phase-encoding direction of K-space. The raw K-space data is divided into sequences of length s with slice size 256 × 256, and the undersampling process is simulated with an undersampling template (mask):
Ku(kx,ky)=K(kx,ky)×mask(kx,ky)
the invention adopts a Cartesian (Cartesian) sampling track, the value of a required sampling point is 1, the value of an unnecessary sampling point is 0, and the formula is as follows:
Figure BDA0003151908620000051
Image-domain data are obtained from the undersampled K-space by the two-dimensional inverse Fourier transform and Min-Max normalized; wavelet-domain data are then obtained by the two-dimensional wavelet transform: w_0 = (w_1, w_2, ..., w_i) = DWT(x_u), where i is the number of subbands (in this experiment i = 4: w_1, w_2, w_3, w_4 are the low-frequency, horizontal-low/vertical-high-frequency, horizontal-high/vertical-low-frequency, and diagonal high-frequency components, respectively). Thus w_0 has 4 channels, with size 10 × 4 × 128 × 128.
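The preprocessing chain just described (undersampling, inverse FFT, Min-Max normalization, single-level 2-D wavelet transform) can be sketched in NumPy. The Haar wavelet is an assumption for the sketch, since the patent does not name the wavelet family; function names are illustrative:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform returning the four subbands
    (LL, LH, HL, HH) as a 4-channel stack; a stand-in for the 2-D DWT."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return np.stack([ll, lh, hl, hh])

def preprocess(k, mask):
    """Undersample k-space, return to the image domain via inverse FFT,
    Min-Max normalize, and transform to the wavelet domain."""
    ku = k * mask                                      # simulate undersampling
    x = np.abs(np.fft.ifft2(ku))                       # image-domain data
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)    # Min-Max normalization
    return haar_dwt2(x)                                # 4 subbands, half size

k = np.fft.fft2(np.random.default_rng(1).random((256, 256)))
mask = np.zeros((256, 256)); mask[::4] = 1             # toy 4x Cartesian mask
w0 = preprocess(k, mask)                               # shape (4, 128, 128)
```

For a 256 × 256 slice this yields the 4 × 128 × 128 subband layout stated above; stacking 10 slices gives the 10 × 4 × 128 × 128 training tensor.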
S2) Composition of the wavelet-domain multi-module feature aggregation network: as shown in fig. 3, the architecture comprises three repeated wavelet-domain residual blocks. Each basic residual block consists of a 3 × 3 convolution, three parallel dilated convolutions with 3 × 3 kernels and dilation rates 1, 2, and 4, and two 1 × 1 convolutions. The undersampled wavelet data first pass through the 3 × 3 convolution to extract shallow features, then through the three convolutions with different dilation rates; a 1 × 1 convolution fuses the multi-scale information, and a residual connection is applied. The outputs of the three residual blocks, F_0, F_1, and F_2, are concatenated along the channel dimension and fused by a 1 × 1 convolution, realizing multi-module wavelet-domain feature fusion; the network fuses different context information, refines the feature expression, and applies a cross-layer skip connection. The wavelet-domain output is then converted to image-domain data by the inverse wavelet transform and to the K-space domain by the inverse Fourier transform; since the undersampled K-space data contain exact measurements, the corresponding positions of the network-reconstructed K-space are replaced with the undersampled K-space values:
k̂ = F W^H f_cnn(w_u|θ)

WDC(k̂)(j) = k̂(j), if j ∉ Ω
WDC(k̂)(j) = (k̂(j) + δ·y(j)) / (1 + δ), if j ∈ Ω

w_rec = W F^H WDC(k̂)

where WDC denotes wavelet-domain data consistency; W and W^H are the two-dimensional wavelet transform and its inverse; f_cnn(w_u|θ) is the network reconstruction; w_u is the initial undersampled wavelet data; F and F^H are the two-dimensional Fourier transform and its inverse; y = M K_0, where M is the undersampling template (mask) and K_0 the initial K-space data; δ is a weight parameter; j is a point in K-space; and Ω is the set of sampled K-space points.
S3) Image-domain multi-module feature aggregation network: the architecture comprises three Bidirectional Feature Fusion (BFF) modules. The output of the wavelet-domain reconstruction network is converted into image-domain data by the inverse wavelet transform and used as the input of the image-domain reconstruction network. In each basic module, every slice of the input data undergoes a 3 × 3 convolution for shallow feature extraction and then a second 3 × 3 convolution, yielding feature maps M_1, M_2, ..., M_n. Let A_1 = M_1; A_1 is convolved (3 × 3) and fused with M_2 to obtain A_2; A_2 is convolved and fused with M_3 to obtain A_3; and so on, until A_{n-1} convolved and fused with M_n gives A_n. The module is activated with a LeakyReLU function:
A_1 = M_1
A_i = LeakyReLU(conv(A_{i-1}) + M_i), i = 2, ..., n
then, the slice dimension is reversely operated, each slice feature in the reverse direction of the slice dimension is convoluted by 3 multiplied by 3 to respectively obtain a feature map W1,W2,...,WnThen let B1=W1,B1Extracting by convolution characteristic and W2Feature fusion to obtain B2A 1 to B2Extracting by convolution characteristic and W3Feature fusion to obtain B3By analogy, Bn-1Extracting by convolution characteristic and WnFeature fusion to obtain BnThe module is corrected by using a LeakRelu function, and the formula is as follows:
B_1 = W_1
B_i = LeakyReLU(conv(B_{i-1}) + W_i), i = 2, ..., n
where conv denotes a convolution operation. Finally, the feature maps A_1, A_2, ..., A_n and B_1, B_2, ..., B_n are fused pairwise by element-wise addition, which enriches each dimension of the image features and ensures effective feature information is learned across slices; the summed data are then concatenated along the slice dimension:

H = cat[(A_1 + B_n), (A_2 + B_{n-1}), ..., (A_n + B_1)]

where cat[·] denotes the concatenation operation.
the characteristics of the image domain data after passing through the three bidirectional characteristic fusion modules are respectively H0、H1、H2Then adding H0、H1、H2The image domain feature fusion method based on the multi-module image domain feature fusion comprises the steps of splicing the image domain feature information together according to channel dimensions, then fusing feature information of different modules together by using 1 x 1 convolution, achieving feature fusion of image domains of multiple modules, enabling a network to fuse different context information, enabling feature expression to be refined, then carrying out long-jump connection, and then carrying out image domain data consistency. The formula is as follows:
x̂_k = F f_cnn(x_u|θ)

IDC(x̂_k)(j) = x̂_k(j), if j ∉ Ω
IDC(x̂_k)(j) = (x̂_k(j) + δ·y(j)) / (1 + δ), if j ∈ Ω

x_rec = F^H IDC(x̂_k)

where IDC denotes image-domain data consistency; F and F^H are the two-dimensional Fourier transform and its inverse; f_cnn(x_u|θ) is the network reconstruction; x_u is the initial undersampled image data; y = M K_0, where M is the undersampling template (mask) and K_0 the initial K-space data; δ is a weight parameter; j is a point in K-space; and Ω is the set of sampled K-space points.
S4) training and reconstructing a network model:
the recursive network model combining wavelet domain and image domain is shown in fig. 2, a training set is put into the network model, reconstructed wavelet domain data and image domain data are obtained through forward propagation, backward propagation is carried out according to the errors of the high-quality wavelet domain data and image domain data and the original wavelet domain data and image data, network parameters are continuously updated until the errors are stable, model parameters are stored,the Adam optimizer is used for training the whole network for the experiment, and the initial learning rate is set to be 2 multiplied by 10-4Due to the apparent memory burden, a sequence with the length s being 10 is randomly intercepted from the sequence for training, the training round epoch being 50, and a LeakRelu function is adopted for activation, wherein alpha is set to be 0.1. The number of iterations is set to nc3 (because of insufficient video memory, training times and iteration times can be increased in the later period to achieve a better reconstruction result), the experiment adopts 4 times of Cartesian sampling track, an undersampled test data set is input, a trained model is loaded, and a high-quality picture is reconstructed, compared with the recent technology, the experiment adopts peak signal-to-noise ratio (PSNR) and Structural Similarity (SSIM) to evaluate the reconstruction quality, and the following table is shown: PSNR and SSIM are both higher than other methods.
[Table: PSNR and SSIM comparison of the proposed method against recent methods]
The above description is only a preferred embodiment of the present invention, and the protection scope of the invention is not limited thereto; its specific structure is allowed to vary, and those skilled in the art may make variations or modifications upon reading the present disclosure, which also fall within the scope of the invention defined by the claims. In general, all changes that come within the scope of the invention as defined by the independent claims are intended to be embraced therein.

Claims (7)

1. A dual-domain recursive network MR reconstruction method based on multi-module feature fusion is characterized by comprising the following steps:
s1) preprocessing data;
s2) constructing a network model;
s3) training a neural network;
s4) testing the network model;
s5) evaluating the quality of the reconstructed image.
2. The method of claim 1, wherein in S1) the K-space data dimension is [number of slices, h, w, c], four-dimensional data comprising a plurality of slices, where c is the number of data channels and h and w are the height and width; sequences of length s = 10 are randomly cropped from the sequences, i.e., [s, h, w, c], which both reduces GPU memory pressure and acts as data augmentation during training; an undersampling template (mask) with the same dimensions as the K-space data is then randomly generated, with different templates across sequences; the undersampling process is then simulated to obtain undersampled K-space data, which is converted by the two-dimensional inverse Fourier transform into single-channel image data; the image data is then normalized as follows:
x_norm = (x − min(x)) / (max(x) − min(x))
and then converting the data into wavelet domain data through two-dimensional wavelet transformation.
3. The method for reconstructing a dual-domain recursive network MR image based on multi-module feature fusion as claimed in claim 1, wherein in S2) the wavelet-domain multi-module feature aggregation network comprises three repeated wavelet-domain residual modules; each basic residual module consists of a 3 × 3 convolution, three parallel dilated convolutions with 3 × 3 kernels and dilation rates 1, 2, and 4, and two 1 × 1 convolutions; the undersampled wavelet data first pass through the 3 × 3 convolution to extract shallow features, then through the three convolutions with different dilation rates; a 1 × 1 convolution fuses the multi-scale information and a residual connection is applied, so that the outputs of the three residual blocks are F_0, F_1, and F_2; F_0, F_1, and F_2 are concatenated along the channel dimension and fused by a 1 × 1 convolution, realizing multi-module wavelet-domain feature fusion; the network fuses different context information, refines the feature expression, applies a cross-layer skip connection, and finally enforces data consistency, replacing the corresponding positions of the network-reconstructed K-space with the sampled K-space.
4. The method according to claim 1, wherein in S2) the image-domain multi-module feature fusion network comprises three Bidirectional Feature Fusion (BFF) modules for learning feature information of adjacent slices; the output of the wavelet-domain reconstruction network is converted into image-domain data by the inverse wavelet transform and used as the input of the image-domain reconstruction network; every slice of the input data undergoes a 3 × 3 convolution for shallow feature extraction and then a second 3 × 3 convolution, yielding feature maps M_1, M_2, ..., M_n; let A_1 = M_1; A_1 is convolved (3 × 3) and fused with M_2 to obtain A_2; A_2 is convolved and fused with M_3 to obtain A_3; and so on, until A_{n-1} convolved and fused with M_n gives A_n; the module is activated with a LeakyReLU function;
the slice dimension is then traversed in reverse: each slice feature in the reversed order is passed through a 3 × 3 convolution, yielding feature maps W_1, W_2, ..., W_n; let B_1 = W_1; B_1 is convolved and fused with W_2 to obtain B_2; B_2 is convolved and fused with W_3 to obtain B_3; and so on, until B_{n-1} convolved and fused with W_n gives B_n; the module is activated with a LeakyReLU function;
finally, the feature maps A_1, A_2, ..., A_n and B_1, B_2, ..., B_n are fused pairwise by element-wise addition, which enriches each dimension of the image features and ensures effective feature information is learned across slices; the summed data are then concatenated along the slice dimension as follows:

H = cat[(A_1 + B_n), (A_2 + B_{n-1}), ..., (A_n + B_1)]

where cat[·] denotes the concatenation operation;
the feature maps of the image domain data after passing through the three bidirectional feature fusion modules are respectively H0、H1、H2Then adding H0、H1、H2Splicing the image frames together according to the channel dimensions, then fusing the feature information of different modules together by using 1 x 1 convolution to realize the feature fusion of image domains of multiple modules, fusing different context information by a network, refining feature expression, then performing cross-layer jump connection, finally performing data consistency, and replacing the sampled K space with the corresponding position of the network reconstruction K space.
5. The method for reconstructing an MR image based on multi-module feature fusion of claim 1, wherein in S3) the wavelet-domain data, the undersampled K-space data, and the mask of the training set are input to the model for training, the original data serve as ground truth (GT), and the loss function is applied to the wavelet-domain output and the image-domain output respectively.
6. The method for reconstructing a dual-domain recursive network MR image based on multi-module feature fusion as claimed in claim 1, wherein in S4) the wavelet-domain data, the undersampled K-space data, and the mask of the test set are input into the trained network to obtain the network-reconstructed magnetic resonance image.
7. The method of claim 1, wherein in S5), PSNR and SSIM are used to evaluate the quality of the reconstructed MR image.
CN202110766581.XA 2021-07-07 2021-07-07 Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation Pending CN113487507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110766581.XA CN113487507A (en) 2021-07-07 2021-07-07 Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation


Publications (1)

Publication Number Publication Date
CN113487507A true CN113487507A (en) 2021-10-08

Family

ID=77941558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110766581.XA Pending CN113487507A (en) 2021-07-07 2021-07-07 Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation

Country Status (1)

Country Link
CN (1) CN113487507A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218453A (en) * 2023-11-06 2023-12-12 中国科学院大学 Incomplete multi-mode medical image learning method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934778A (en) * 2017-03-10 2017-07-07 Beijing University of Technology MR image reconstruction method based on wavelet-domain structure and non-local group sparsity
CN111028306A (en) * 2019-11-06 2020-04-17 Hangzhou Dianzi University AR2U-Net neural network-based rapid magnetic resonance imaging method
CN111784792A (en) * 2020-06-30 2020-10-16 Sichuan University Rapid magnetic resonance reconstruction system based on dual-domain convolutional neural network, and training method and application thereof
CN111951344A (en) * 2020-08-09 2020-11-17 Kunming University of Science and Technology Magnetic resonance image reconstruction method based on cascaded parallel convolutional network
CN112164122A (en) * 2020-10-30 2021-01-01 Harbin University of Science and Technology Rapid CS-MRI reconstruction method based on deep residual generative adversarial network
CN112946545A (en) * 2021-01-28 2021-06-11 Hangzhou Dianzi University PCU-Net network-based fast multi-channel magnetic resonance imaging method
CN113077527A (en) * 2021-03-16 2021-07-06 Tianjin University Rapid magnetic resonance image reconstruction method based on undersampling

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ALI GHOLIPOUR et al.: "Super-resolution reconstruction in frequency, image, and wavelet domains to reduce through-plane partial voluming in MRI", Med. Phys., pages 6919-6932 *
ZHILUN WANG et al.: "IKWI-net: A cross-domain convolutional neural network for undersampled magnetic resonance image reconstruction", Magnetic Resonance Imaging, pages 1-10 *
LYU JUNJIE (吕俊杰): "Magnetic resonance image denoising and reconstruction based on deep learning", China Master's Theses Full-text Database, Medicine & Health Sciences, pages 060-208 *
WU HAOBO (吴浩博): "Research on image super-resolution reconstruction algorithms based on generative adversarial networks", China Master's Theses Full-text Database, Information Science & Technology, pages 138-1079 *
YANG BING (杨兵) et al.: "Medical image segmentation based on deep feature aggregation network", Computer Engineering, pages 1-12 *
YANG BING (杨兵) et al.: "Automatic segmentation of magnetic resonance images incorporating tissue characteristics", Zhejiang Medical Association Medical Engineering Academic Conference, pages 2-8 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218453A (en) * 2023-11-06 2023-12-12 University of Chinese Academy of Sciences Incomplete multi-modal medical image learning method
CN117218453B (en) * 2023-11-06 2024-01-16 University of Chinese Academy of Sciences Incomplete multi-modal medical image learning method

Similar Documents

Publication Publication Date Title
Ghodrati et al. MR image reconstruction using deep learning: evaluation of network structure and loss functions
CN108717717B (en) Sparse MRI reconstruction method based on combination of convolutional neural network and iteration method
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN113077527B (en) Rapid magnetic resonance image reconstruction method based on undersampling
US11170543B2 (en) MRI image reconstruction from undersampled data using adversarially trained generative neural network
CN111951344B (en) Magnetic resonance image reconstruction method based on cascade parallel convolution network
Luo et al. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models
CN113379867B (en) Nuclear magnetic resonance image reconstruction method based on joint optimization sampling matrix
CN103142228A (en) Compressed sensing magnetic resonance fast imaging method
CN109712077B (en) Depth dictionary learning-based HARDI (hybrid automatic repeat-based) compressed sensing super-resolution reconstruction method
CN112991483B (en) Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method
CN105931242B (en) Dynamic nuclear magnetic resonance (DNMR) image rebuilding method based on dictionary learning and time gradient
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
CN109920017B (en) Parallel magnetic resonance imaging reconstruction method of joint total variation Lp pseudo norm based on self-consistency of feature vector
Wang et al. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution
CN110942496A (en) Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN113487507A (en) Dual-domain recursive network MR reconstruction method based on multi-module feature aggregation
CN110148193A (en) Dynamic magnetic resonance method for parallel reconstruction based on adaptive quadrature dictionary learning
CN105931184B (en) SAR image super-resolution method based on combined optimization
CN113920211B (en) Quick magnetic sensitivity weighted imaging method based on deep learning
CN110598579A (en) Hypercomplex number magnetic resonance spectrum reconstruction method based on deep learning
CN115471580A (en) Physical intelligent high-definition magnetic resonance diffusion imaging method
Pan et al. Iterative self-consistent parallel magnetic resonance imaging reconstruction based on nonlocal low-rank regularization
CN113674379A (en) MRI reconstruction method, system and computer readable storage medium of common sparse analysis model based on reference support set
Shangguan et al. Multi-slice compressed sensing MRI reconstruction based on deep fusion connection network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination