CN112734869B - Rapid magnetic resonance imaging method based on sparse complex U-shaped network - Google Patents
- Publication number
- CN112734869B CN202011476413.9A
- Authority
- CN
- China
- Prior art keywords: complex, data, network, sparse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T11/003 — 2D image generation; reconstruction from projections, e.g. tomography
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
Abstract
The invention discloses a rapid magnetic resonance imaging method based on a sparse complex U-shaped network, comprising eight steps: data acquisition, simulated undersampling, zero-filling reconstruction, complex differential transformation, SCUNet network construction, SCUNet network training, complex sparse data artifact suppression, and inverse filtering reconstruction. The method is a fast complex magnetic resonance imaging method with great application potential. It is the first to sparsify complex image-space data with a complex differential transformation, to train on the resulting sparse complex data, and to reconstruct complex magnetic resonance images with an inverse filtering algorithm. As a result, the SCUNet training network extracts the features of complex data more effectively, the quality of the final undersampled complex reconstructed images is improved, and both the amplitude and phase information of the undersampled magnetic resonance images are preserved.
Description
Technical Field
The invention belongs to the field of magnetic resonance imaging, and relates to a rapid magnetic resonance imaging method based on a sparse complex U-shaped network.
Background
Magnetic resonance imaging (MRI) is widely used in clinical practice thanks to its high spatial resolution, good soft-tissue contrast, and absence of radiation damage, but its application is limited by expensive equipment and long scan times. Improving MRI imaging speed while preserving image quality has long been one of the hot topics in the MRI research field.
In recent years, with improvements in computer hardware, particularly the rapid growth of GPU (Graphics Processing Unit) computing power, researchers have begun applying deep learning methods based on convolutional neural networks to fast magnetic resonance imaging. However, most current CNN-based fast MRI methods are trained on amplitude images (i.e., real-valued data), so the reconstructed images carry no phase information. The k-space data acquired by MRI are complex-valued, and classical discrete Fourier reconstruction therefore yields both amplitude and phase information. Since phase information plays an important role in applications such as susceptibility-weighted imaging, fast MRI methods based on complex convolutional neural networks and complex-valued training have much broader applicability.
Patents filed to date on fast magnetic resonance imaging based on complex convolutional neural networks include the following:
"A magnetic resonance imaging method based on a deep convolutional neural network" (application number: CN 201811388227.2) proposes training in k-space with a deep convolutional neural network, splitting complex values into real and imaginary parts and training each as a real-valued image. "A head and neck joint imaging method and device based on deep prior learning" (application number: CN 201811525187.1) proposes a head-and-neck joint imaging method based on a complex residual neural network and prior learning, addressing the problem that existing head-and-neck joint imaging cannot simultaneously guarantee imaging accuracy and imaging time. "A rapid magnetic resonance imaging method based on a complex R2U-Net network" (application number: 2019122800376560) proposes a recursive residual U-shaped convolutional neural network with complex module computation, complex loss values, and complex training, applicable to complex MRI imaging.
Articles published at home and abroad on fast magnetic resonance imaging with complex convolutional neural networks and deep learning include the following. In 2018, Lee D et al. proposed training the amplitude and phase of magnetic resonance data separately with residual networks (Lee D, Yoo J, Tak S, et al. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Transactions on Biomedical Engineering, 2018, 65(9):1985-1995). Also in 2018, Hammernik K et al. proposed splitting complex magnetic resonance data into real and imaginary parts and training the network on each as real-valued data (Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med 2018;79(6):3055-71). In the same year, Dedmari M A et al. proposed a magnetic resonance image reconstruction method based on a complex fully convolutional neural network, which reconstructs complex knee data with a dense fully convolutional network and recovers not only the amplitude image but also the phase image (Dedmari M A, Conjeti S, Estrada S, et al. Complex fully convolutional neural networks for MR image reconstruction. International Workshop on Machine Learning for Medical Image Reconstruction. Springer, Cham, 2018:30-38). In 2019, Wang S et al. proposed a multi-channel image reconstruction method that accelerates parallel MRI with a residual complex convolutional neural network (Wang S, Cheng H, Ying L, Xiao T, Ke Z, Zheng H, Liang D. DeepcomplexMRI: Exploiting deep residual network for fast parallel MR imaging with complex convolution. Magnetic Resonance Imaging, 2020;68:136-47).
The training data in the above complex-CNN articles and patents are either k-space data or conventional image-space data; no patent or article has yet appeared on fast magnetic resonance imaging based on neural network training with sparse complex data.
Disclosure of Invention
Addressing the shortcomings of existing complex convolutional neural networks for fast magnetic resonance imaging, the invention provides a rapid magnetic resonance imaging method based on a sparse complex U-shaped network. It is the first to sparsify complex image-space data with a complex differential transformation, to use the resulting sparse complex data as training data, and to use a sparse complex U-shaped network (SCUNet) as the training network; the sparse complex data output by the network are then reconstructed by inverse filtering. The invention helps extract complex data features more effectively, reduces differences among training data caused by background tissue contrast, broadens the range of usable training data, and improves the complex-image reconstruction quality of undersampled magnetic resonance data.
A rapid magnetic resonance imaging method based on a sparse complex U-shaped network specifically comprises the following steps:
step 1: data acquisition
Multiple sets of fully sampled k-space data are acquired for training, each set denoted S_ref(x_k, y_k), where x_k is the position along the frequency-encoding (FE) direction of k-space and y_k is the position along the phase-encoding (PE) direction. The reference fully sampled image I_ref(x, y) is obtained by the inverse discrete Fourier transform (IDFT):
Iref(x,y)=IDFT(Sref(xk,yk)) (1)
The actual undersampled k-space data for the fast MRI application are denoted S_acq(x_k, y_k), which include the k-space center portion data S_cacq(x_k, y_k).
Preferably, the number of fully sampled k-space data sets is 500 or more;
Step 2: analog undersampling
The fully sampled k-space data S_ref(x_k, y_k) acquired in step 1 are undersampled in simulation. The undersampling pattern is regular undersampling: one row of data is acquired every N rows along the PE direction of k-space, where N is an integer greater than 1. The data acquired in the central region of the k-space PE direction are denoted S_c(x_k, y_k); all undersampled k-space data are denoted S_u(x_k, y_k), which include S_c(x_k, y_k):
Su(xk,yk)=Sref(xk,yk).*mask(xk,yk) (2)
where .* denotes the element-wise (dot) product and mask(x_k, y_k) is the undersampling template matrix:

mask(x_k, y_k) = 1, if y_k is an acquired row (every N-th row along the PE direction, or a row of the fully sampled central region); mask(x_k, y_k) = 0, otherwise (3)
Step 3: zero-filling reconstruction
For the undersampled data S_u(x_k, y_k) and S_acq(x_k, y_k), inverse discrete Fourier transforms are performed to obtain the zero-filling reconstructed images, denoted I_u(x, y) and I_acq(x, y), respectively:
Iu(x,y)=IDFT(Su(xk,yk)) (4)
Iacq(x,y)=IDFT(Sacq(xk,yk)) (5)
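As an illustration of steps 2-3, the NumPy sketch below simulates regular undersampling with a fully sampled central region and performs the zero-filling IDFT reconstruction. The function name, the acceleration factor `n`, and `center_rows` are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def undersample_and_zero_fill(s_ref, n=4, center_rows=32):
    """Simulate regular PE-direction undersampling of k-space (step 2)
    and return the zero-filled reconstruction (step 3).
    `n` and `center_rows` are illustrative parameters."""
    ny = s_ref.shape[0]
    mask = np.zeros(ny, dtype=bool)
    mask[::n] = True                      # one row every n rows along PE
    c0 = (ny - center_rows) // 2
    mask[c0:c0 + center_rows] = True      # fully sampled central region
    s_u = s_ref * mask[:, None]           # eq. (2): element-wise masking
    i_u = np.fft.ifft2(s_u)               # eq. (4): zero-filled IDFT image
    return mask, s_u, i_u
```

The unacquired rows stay zero, which is what produces the aliasing artifacts that the network later suppresses.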
Step 4: complex differential transformation
Complex differential transformation is performed on I ref(x,y)、Iu (x, y) and I acq (x, y), respectively:
Eref(x,y)=Iref(x,y+1)-Iref(x,y) (6)
Eu(x,y)=Iu(x,y+1)-Iu(x,y) (7)
Eacq(x,y)=Iacq(x,y+1)-Iacq(x,y) (8)
E_ref(x, y), E_u(x, y), and E_acq(x, y) denote the sparse complex data obtained by complex differential transformation of I_ref(x, y), I_u(x, y), and I_acq(x, y), respectively; every term in formulas (6)-(8) is complex.
E_ref(x, y) and E_u(x, y) are paired one-to-one to generate the sparse complex training data.
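A minimal sketch of the complex differential transformation of formulas (6)-(8). Here the difference is taken circularly along the PE axis (an assumption: circular differencing is what makes the k-space inverse filtering of step 8 exact); regions of slowly varying complex signal map to near-zero values, which is the source of the sparsity.

```python
import numpy as np

def complex_diff(img):
    """Complex differential transform along the PE (second) axis,
    E(x, y) = I(x, y+1) - I(x, y), taken circularly."""
    return np.roll(img, -1, axis=1) - img
```

For a piecewise-constant complex image, almost all entries of `complex_diff(img)` are zero, so the transformed data are sparse.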
Step 5: SCUNet network construction
The SCUNet network is a complex-valued extension of the U-shaped convolutional neural network, and the input data it processes are sparse complex data. The network comprises two parts: complex downsampling and complex upsampling. The complex downsampling layers comprise complex convolution, complex batch normalization, complex activation, and complex pooling; the complex upsampling layers comprise complex upsampling, complex data merging, complex convolution, complex batch normalization, and complex activation. SCUNet network construction comprises the following six steps:
Step 5-1: complex convolution
The complex convolution formula is as follows:

C_n = K_Cn ⊗ C_{n-1} = (K_R ⊗ a − K_I ⊗ b) + j(K_R ⊗ b + K_I ⊗ a) (9)

where ⊗ denotes the convolution operation; j is the imaginary unit; the convolution kernel is K_Cn = K_R + jK_I, where K_Cn is a complex matrix and K_R and K_I are real matrices; the input feature of each layer is C_{n-1} = a + jb, where a and b are real matrices; C_n is the output of the n-th layer after convolution. When n = 1, C_{n-1} = C_0 = E_u(x, y), i.e., the input of the complex convolution is the sparse complex data E_u(x, y) obtained by complex differential transformation of the zero-filling reconstructed image I_u(x, y).
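The complex convolution rule above can be realized with four real convolutions. The sketch below uses a plain 'valid'-mode correlation as a stand-in for a CNN convolution layer (the helper names are illustrative) and checks the identity (K_R + jK_I) ⊗ (a + jb) = (K_R ⊗ a − K_I ⊗ b) + j(K_R ⊗ b + K_I ⊗ a).

```python
import numpy as np

def conv2d_real(x, k):
    """Plain 'valid' 2-D correlation of real arrays (no kernel flip;
    a stand-in for a CNN convolution layer)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def complex_conv(c, k_r, k_i):
    """Complex convolution via four real convolutions: with kernel
    K = K_R + j*K_I and input C = a + j*b, the output is
    (K_R*a - K_I*b) + j(K_R*b + K_I*a)."""
    a, b = c.real, c.imag
    return (conv2d_real(a, k_r) - conv2d_real(b, k_i)) \
        + 1j * (conv2d_real(b, k_r) + conv2d_real(a, k_i))
```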
Step 5-2: multiple batch normalization
The complex batch normalization formulas are as follows:

C̃_n = V^(−1/2) (C_n − E[C_n]) (10)

CBN_out = CBN(C_n) = γ C̃_n + β (11)

V = [ V_RR  V_RI ; V_IR  V_II ] = [ Cov(R{C_n}, R{C_n})  Cov(R{C_n}, I{C_n}) ; Cov(I{C_n}, R{C_n})  Cov(I{C_n}, I{C_n}) ] (12)

where CBN denotes the complex batch normalization operation and C̃_n is the normalized intermediate value; CBN_out is the complex batch-normalized output; V is the covariance matrix, with V_RI = V_IR; the shift parameter β is a complex number; γ is the scaling parameter matrix; Cov denotes covariance; R{C_n} and I{C_n} denote the real and imaginary parts of C_n, respectively.
Preferably, V_RI and V_IR are initialized to 0, and V_RR and V_II to 1/√2; both the real part R{β} and the imaginary part I{β} of the shift parameter β are initialized to 0; γ_RR and γ_II are initialized to 1/√2, and γ_RI to 0.
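A sketch of the whitening step of complex batch normalization, following the usual complex-BN formulation (center the batch, whiten the real/imaginary parts with V^(−1/2), then scale and shift). The function name and the use of an eigendecomposition for the 2×2 inverse matrix square root are illustrative choices; the defaults mirror the preferred γ = (1/√2)·I and β = 0 initialization.

```python
import numpy as np

def complex_batch_norm(z, gamma=None, beta=0j, eps=1e-5):
    """Whiten a batch of complex values: centre, multiply the stacked
    real/imag parts by V^(-1/2), then scale by gamma and shift by beta."""
    if gamma is None:
        gamma = (2 ** -0.5) * np.eye(2)          # preferred initialization
    x = np.stack([z.real.ravel(), z.imag.ravel()])   # 2 x N
    x = x - x.mean(axis=1, keepdims=True)            # centre the batch
    v = (x @ x.T) / x.shape[1] + eps * np.eye(2)     # covariance matrix V
    w, q = np.linalg.eigh(v)                         # V is symmetric 2x2
    v_inv_sqrt = q @ np.diag(w ** -0.5) @ q.T        # V^(-1/2)
    y = gamma @ (v_inv_sqrt @ x)                     # scale
    return (y[0] + 1j * y[1]).reshape(z.shape) + beta  # shift
```

With the default γ, the output's real/imaginary covariance is approximately 0.5·I, as expected for the 1/√2 scaling.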
Step 5-3: plural activation
The complex activation formula is as follows:

CReLU_out = CReLU(CBN_out) = ReLU(|CBN_out| + b) · e^(jθ_CBN) (13)

where CReLU denotes the complex activation function; ReLU denotes the real activation function; |CBN_out| and θ_CBN are the modulus and phase of CBN_out; b is a learnable parameter; CReLU_out is the output of the complex activation function; e is the natural constant.
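A sketch of the activation, assuming a modReLU-style form (an assumption consistent with the quantities the text names: the phase θ, a learnable parameter, and the constant e): the real ReLU acts on a biased modulus while the phase is kept unchanged. The bias value `b` here is an arbitrary illustration of the learnable parameter.

```python
import numpy as np

def crelu(z, b=-0.1):
    """modReLU-style complex activation (an assumption):
    CReLU(z) = ReLU(|z| + b) * exp(j * theta(z)),
    i.e. threshold the modulus, preserve the phase."""
    mag = np.maximum(np.abs(z) + b, 0.0)
    return mag * np.exp(1j * np.angle(z))
```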
Step 5-4: plural pooling
Complex max pooling is adopted: within each sliding window, the complex value with the largest modulus is taken as the complex pooling output, completing complex downsampling.
Step 5-5: complex data merging
The corresponding data of matching layers in the complex downsampling and complex upsampling parts of the SCUNet network are merged.
Step 5-6: complex upsampling
Complex upsampling is performed by complex nearest-neighbor interpolation: the complex value with the largest amplitude is assigned to all pixels in the neighborhood after interpolation.
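Steps 5-4 and 5-6 can be sketched as follows: pooling selects, per window, the complex value of largest modulus; nearest-neighbor upsampling fills each output neighborhood with a single complex value. The function names and the window size are illustrative.

```python
import numpy as np

def complex_max_pool(z, win=2):
    """Complex max pooling: within each win x win window, output the
    complex value whose modulus is largest (step 5-4)."""
    h, w = z.shape
    out = np.empty((h // win, w // win), dtype=complex)
    for i in range(0, h - h % win, win):
        for j in range(0, w - w % win, win):
            patch = z[i:i + win, j:j + win].ravel()
            out[i // win, j // win] = patch[np.argmax(np.abs(patch))]
    return out

def complex_upsample(z, win=2):
    """Complex nearest-neighbour upsampling (step 5-6): each pixel's
    complex value fills its win x win neighbourhood in the output."""
    return np.repeat(np.repeat(z, win, axis=0), win, axis=1)
```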
Step 6: SCUNet network training optimization:
Using the sparse complex training data generated in step 4, the SCUNet network constructed in step 5 is trained and optimized based on the Adam algorithm and a complex loss function; when the complex loss function reaches its minimum, the SCUNet network parameters are saved as the optimized network parameters θ.
The complex loss function is calculated by the following formula:

Closs(θ) = (1/T) Σ_{t=1}^{T} ‖SCUNet(E_u^t(x, y), θ) − E_ref^t(x, y)‖_2^2 (16)

where T is the batch size and t indexes the t-th image in the batch, t = 1, 2, …, T; SCUNet denotes the SCUNet network constructed in step 5; ‖·‖_2^2 denotes the squared two-norm.
Step 7: complex sparse data artifact suppression
The sparse complex data E_acq(x, y), obtained from the zero-filling reconstruction of the actually acquired undersampled data, are predicted with the parameter-optimized SCUNet network to obtain the artifact-suppressed complex sparse data E_acqout(x, y):
Eacqout(x,y)=SCUNet(Eacq(x,y),θ) (17)
step 8: inverse filter reconstruction
The artifact-suppressed result E_acqout(x, y) is transformed by the discrete Fourier transform (DFT) to obtain k-space data; the values at the k-space points that were actually acquired are replaced by the acquired data S_acq(x_k, y_k), yielding k-space data S_acqout; the final complex image I_recon is then reconstructed by the inverse filtering of formula (18):

I_recon(x, y) = IDFT( S_acqout(x_k, k_y) / (e^(j2πk_y/N_y) − 1) ) (18)

where IDFT denotes the inverse discrete Fourier transform, k_y is the k-space position along the PE direction, and N_y is the data dimension in the PE direction.
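A numerical sketch of step 8's inverse filtering, assuming the circular differential transform of step 4: in k-space, the circular difference E(x, y) = I(x, y+1) − I(x, y) multiplies each PE line by H(k_y) = e^(j2πk_y/N_y) − 1, so dividing by H and inverse transforming recovers the image. H(0) = 0 is the zero-denominator case; here the k_y = 0 line is simply zeroed, which corresponds to the fully sampled k-space center supplying that information in the patent. The function name is illustrative.

```python
import numpy as np

def inverse_filter_pe(e):
    """Inverse filtering along the PE (second) axis: divide the DFT of
    the difference data by H(k_y) = exp(j*2*pi*k_y/N_y) - 1 and apply
    the inverse DFT.  The k_y = 0 line (where H = 0) is set to zero."""
    n_y = e.shape[1]
    h = np.exp(2j * np.pi * np.arange(n_y) / n_y) - 1.0   # PE-direction filter
    s_e = np.fft.fft(e, axis=1)
    h_safe = np.where(np.abs(h) > 1e-12, h, 1.0)          # avoid divide-by-zero
    s = np.where(np.abs(h) > 1e-12, s_e / h_safe, 0.0)
    return np.fft.ifft(s, axis=1)
```

If the image has zero mean along the PE direction (so the k_y = 0 line carries no information), the reconstruction is exact.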
The invention has the following beneficial effects:
(1) By training on a large amount of sparse complex data, the method effectively removes undersampling artifacts and reconstructs both amplitude and phase images, meeting the needs of many fast magnetic resonance imaging applications.
(2) Training on sparse complex data makes effective complex-data features easier to extract than training on conventional complex images;
(3) The complex differential transformation reduces the contrast differences among the sparse complex data, so magnetic resonance images acquired with different parameters can be trained together, increasing the total amount of training data and preventing overfitting;
(4) The central part of k-space is fully sampled, which both preserves the contrast information of the image data and avoids a zero denominator during the inverse filtering reconstruction.
(5) Although the SCUNet network takes a long time to train, reconstruction takes only seconds; once the network is trained and the code optimized, imaging can meet real-time online requirements.
Drawings
FIG. 1 is a flow chart of the reconstruction of data acquisition according to the present invention;
FIG. 2 is a schematic diagram of the data undersampling of the present invention;
Fig. 3 (a) - (j) are complex image reconstruction results in the embodiment.
Detailed Description
The invention is further explained below with reference to the drawings.
The graphics hardware used in this embodiment comprises Tesla K80 and NVIDIA GeForce RTX 2070 GPUs, with 16 GB RAM and a 2.21 GHz CPU.
As shown in fig. 1, the present invention comprises 8 steps: data acquisition, simulated undersampling, zero filling reconstruction, complex differential transformation, SCUNet network construction, SCUNet network training, complex sparse data artifact suppression and inverse filtering reconstruction. The method comprises the following steps:
step 1: data acquisition
1200 sets of fully sampled magnetic resonance brain k-space data, each of size 256 × 256, are acquired for training; each set is denoted S_ref(x_k, y_k), where x_k is the position along the frequency-encoding (FE) direction of k-space and y_k is the position along the phase-encoding (PE) direction. The reference fully sampled image I_ref(x, y) is obtained by the inverse discrete Fourier transform (IDFT).
The actual k-space undersampled data for the fast MRI application is denoted by S acq(xk,yk).
Step 2: analog undersampling
As shown in fig. 2, the fully sampled k-space data S_ref(x_k, y_k) are undersampled in simulation using a regular undersampling pattern: one row of data is acquired every 5 rows along the PE direction of k-space; the central region of the k-space PE direction is fully acquired and denoted S_c(x_k, y_k); S_u(x_k, y_k) denotes all undersampled k-space data.
Step 3: zero-filling reconstruction
For the undersampled data S_u(x_k, y_k) and S_acq(x_k, y_k), inverse discrete Fourier transforms are performed to obtain the zero-filling reconstructed images, denoted I_u(x, y) and I_acq(x, y), respectively.
Step 4: complex differential transformation
Complex differential transformation is applied to I_ref(x, y), I_u(x, y), and I_acq(x, y) to obtain E_ref(x, y), E_u(x, y), and E_acq(x, y), respectively. E_ref(x, y) and E_u(x, y) are paired one-to-one to form 1200 pairs of sparse complex training data.
Step 5: SCUNet network construction
The SCUNet network is a complex expansion of the U-shaped convolutional neural network and comprises two parts of complex downsampling and complex upsampling, each part has 5 layers, and each layer corresponds to each other two by two. The complex downsampling layer comprises complex convolution, complex batch normalization, complex activation and complex pooling; the complex upsampling layer includes complex upsampling, complex data merging, complex convolution, complex batch normalization, and complex activation.
Step 6: SCUNet network training:
training the SCUNet network based on the Adam algorithm and the calculation of the complex loss function Closs, and when the complex loss function is minimum, storing the network parameters of the SCUNet network at the moment as optimized network parameters theta. The training time for SCUNet networks was approximately 565 minutes.
Step 7: complex sparse data artifact suppression
The sparse complex data E_acq(x, y), obtained from the actually acquired undersampled data, are predicted with the parameter-optimized SCUNet network to obtain the artifact-suppressed complex sparse data E_acqout(x, y).
Step 8: inverse filter reconstruction
The artifact-suppressed result E_acqout(x, y) is transformed by the discrete Fourier transform (DFT) to obtain k-space data; the values at the k-space points that were actually acquired are replaced by the acquired data S_acq(x_k, y_k), yielding k-space data S_acqout; the final complex image I_recon is reconstructed by inverse filtering.
Fig. 3 (a)-(j) show the imaging results of this embodiment: fig. 3 (a)-(e) are amplitude images and fig. 3 (f)-(j) are phase images; (a) and (f) are the fully sampled amplitude and phase reference maps; (b) and (g) are the amplitude and phase maps of the complex sparse data after zero-filling reconstruction and complex differential transformation of the undersampled data; (c) and (h) are the amplitude and phase maps after SCUNet-based complex sparse data artifact suppression; (d) and (i) are the amplitude and phase maps after inverse filtering reconstruction; (e) and (j) are the amplitude and phase error maps. Fig. 3 shows that the method effectively suppresses the aliasing artifacts caused by undersampling: the errors in both the amplitude error map (e) and the phase error map (j) are very small, and the objective quantitative error (TRE) of the reconstructed complex image is only 6.94e-4.
Claims (4)
1. A rapid magnetic resonance imaging method based on a sparse complex U-shaped network is characterized in that: the method specifically comprises the following steps:
step 1: data acquisition
Multiple sets of fully sampled k-space data are acquired for training, each set denoted S_ref(x_k, y_k), where x_k is the position along the frequency-encoding (FE) direction of k-space and y_k is the position along the phase-encoding (PE) direction; the reference fully sampled image I_ref(x, y) is obtained by the inverse discrete Fourier transform (IDFT):
Iref(x,y)=IDFT(Sref(xk,yk)) (1)
The actual undersampled k-space data for the fast MRI application are denoted S_acq(x_k, y_k), which include the k-space center portion data S_cacq(x_k, y_k);
Step 2: analog undersampling
The fully sampled k-space data S_ref(x_k, y_k) acquired in step 1 are undersampled in simulation; the undersampling pattern is regular undersampling: one row of data is acquired every N rows along the PE direction of k-space, where N is an integer greater than 1; the data acquired in the central region of the k-space PE direction are denoted S_c(x_k, y_k); all undersampled k-space data are denoted S_u(x_k, y_k), which include S_c(x_k, y_k):
Su(xk,yk)=Sref(xk,yk).*mask(xk,yk) (2)
where .* denotes the element-wise (dot) product and mask(x_k, y_k) is the undersampling template matrix:

mask(x_k, y_k) = 1, if y_k is an acquired row (every N-th row along the PE direction, or a row of the fully sampled central region); mask(x_k, y_k) = 0, otherwise (3)
Step 3: zero-filling reconstruction
For the undersampled data S_u(x_k, y_k) and S_acq(x_k, y_k), inverse discrete Fourier transforms are performed to obtain the zero-filling reconstructed images, denoted I_u(x, y) and I_acq(x, y), respectively:
Iu(x,y)=IDFT(Su(xk,yk)) (4)
Iacq(x,y)=IDFT(Sacq(xk,yk)) (5)
Step 4: complex differential transformation
Complex differential transformation is performed on I ref(x,y)、Iu (x, y) and I acq (x, y), respectively:
Eref(x,y)=Iref(x,y+1)-Iref(x,y) (6)
Eu(x,y)=Iu(x,y+1)-Iu(x,y) (7)
Eacq(x,y)=Iacq(x,y+1)-Iacq(x,y) (8)
E_ref(x, y), E_u(x, y), and E_acq(x, y) denote the sparse complex data obtained by complex differential transformation of I_ref(x, y), I_u(x, y), and I_acq(x, y), respectively; every term in formulas (6)-(8) is complex;
E_ref(x, y) and E_u(x, y) are paired one-to-one to generate the sparse complex training data;
Step 5: SCUNet network construction
SCUNet is a complex-valued extension of the U-shaped convolutional neural network, and the input data it processes are sparse complex data; the network comprises a complex downsampling part and a complex upsampling part; the complex downsampling layers comprise complex convolution, complex batch normalization, complex activation, and complex pooling; the complex upsampling layers comprise complex upsampling, complex data merging, complex convolution, complex batch normalization, and complex activation;
step 6: SCUNet network training optimization:
Training and optimizing the SCUNet network constructed in the step 5 based on the Adam algorithm and the calculation of the complex loss function by using the sparse complex training data generated in the step 4, and storing the parameters of the SCUNet network as optimized network parameters theta when the complex loss function is minimum;
the complex loss function is calculated by the following formula:

Closs(θ) = (1/T) Σ_{t=1}^{T} ‖SCUNet(E_u^t(x, y), θ) − E_ref^t(x, y)‖_2^2 (9)

where T is the batch size and t indexes the t-th image in the batch, t = 1, 2, …, T; SCUNet denotes the SCUNet network constructed in step 5; ‖·‖_2^2 denotes the squared two-norm;
Step 7: complex sparse data artifact suppression
Predicting the actually acquired undersampled data zero-filling reconstruction map E acq (x, y) by using a SCUNet network after parameter optimization to obtain complex sparse data E acqout (x, y) after artifact suppression:
Eacqout(x,y)=SCUNet(Eacq(x,y),θ) (10)
step 8: inverse filter reconstruction
The artifact-suppressed result E_acqout(x, y) is transformed by the discrete Fourier transform (DFT) to obtain k-space data; the values at the k-space points that were actually acquired are replaced by the acquired data S_acq(x_k, y_k), yielding k-space data S_acqout; the final complex image I_recon is reconstructed by the inverse filtering of formula (11):

I_recon(x, y) = IDFT( S_acqout(x_k, k_y) / (e^(j2πk_y/N_y) − 1) ) (11)

where IDFT denotes the inverse discrete Fourier transform, k_y is the k-space position along the PE direction, and N_y is the data dimension in the PE direction.
2. The sparse complex U-network based rapid magnetic resonance imaging method of claim 1, wherein: the number of k-space data fully sampled in step 1 is 500 or more.
3. The sparse complex U-network based rapid magnetic resonance imaging method of claim 1, wherein: the specific steps of constructing SCUNet networks in step 5 are as follows:
step 5-1: complex convolution
The complex convolution formula is as follows:

C_n = K_Cn ⊗ C_{n-1} = (K_R ⊗ a − K_I ⊗ b) + j(K_R ⊗ b + K_I ⊗ a)

where ⊗ denotes the convolution operation; j is the imaginary unit; the convolution kernel is K_Cn = K_R + jK_I, where K_Cn is a complex matrix and K_R and K_I are real matrices; the input feature of each layer is C_{n-1} = a + jb, where a and b are real matrices; C_n is the output of the n-th layer after convolution; when n = 1, C_{n-1} = C_0 = E_u(x, y), that is, the input of the complex convolution is the sparse complex data E_u(x, y) obtained by complex differential transformation of the zero-filling reconstructed image I_u(x, y);
Step 5-2: multiple batch normalization
The complex batch normalization formulas are as follows:

C̃_n = V^(−1/2) (C_n − E[C_n])

CBN_out = CBN(C_n) = γ C̃_n + β

V = [ V_RR  V_RI ; V_IR  V_II ] = [ Cov(R{C_n}, R{C_n})  Cov(R{C_n}, I{C_n}) ; Cov(I{C_n}, R{C_n})  Cov(I{C_n}, I{C_n}) ]

where CBN denotes the complex batch normalization operation and C̃_n is the normalized intermediate value; CBN_out is the complex batch-normalized output; V is the covariance matrix, with V_RI = V_IR; the shift parameter β is a complex number; γ is the scaling parameter matrix; Cov denotes covariance; R{C_n} and I{C_n} denote the real and imaginary parts of C_n, respectively;
Step 5-3: plural activation
The complex activation formula is as follows:

CReLU_out = CReLU(CBN_out) = ReLU(|CBN_out| + b) · e^(jθ_CBN)

where CReLU denotes the complex activation function; ReLU denotes the real activation function; |CBN_out| and θ_CBN are the modulus and phase of CBN_out; b is a learnable parameter; CReLU_out is the output of the complex activation function; e is the natural constant;
Step 5-4: plural pooling
Adopting a complex maximum value pooling method, solving the complex number with the maximum complex data modulus value in the window according to the size of the sliding window as complex pooling output, and completing complex downsampling;
step 5-5: complex data merging
The corresponding data of matching layers in the complex downsampling and complex upsampling parts of the SCUNet network are merged;
step 5-6: complex upsampling
Complex upsampling is performed by complex nearest-neighbor interpolation: the complex value with the largest amplitude is assigned to all pixels in the neighborhood after interpolation.
4. A method of rapid magnetic resonance imaging based on a sparse complex U-shaped network as claimed in claim 3, wherein: in step 5-2, V RI and V IR are initialized to 0, and V RR and V II are initializedBoth the real part R { beta } and the imaginary part I { beta } of the shift parameter beta are initialized to 0; gamma RR and gamma II initialisation/>Gamma RI is initialized to 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011476413.9A CN112734869B (en) | 2020-12-15 | 2020-12-15 | Rapid magnetic resonance imaging method based on sparse complex U-shaped network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112734869A CN112734869A (en) | 2021-04-30 |
CN112734869B true CN112734869B (en) | 2024-04-26 |
Family
ID=75602156
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011476413.9A Active CN112734869B (en) | 2020-12-15 | 2020-12-15 | Rapid magnetic resonance imaging method based on sparse complex U-shaped network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112734869B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI792513B (en) * | 2021-08-20 | 2023-02-11 | 財團法人國家衛生研究院 | Method of removing motion-induced ghost artifacts in mri |
CN114010180B (en) * | 2021-11-05 | 2024-04-26 | 清华大学 | Magnetic resonance rapid imaging method and device based on convolutional neural network |
CN114581550B (en) * | 2021-12-31 | 2023-04-07 | 浙江大学 | Magnetic resonance imaging down-sampling and reconstruction method based on cross-domain network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103472419A (en) * | 2013-08-30 | 2013-12-25 | 深圳先进技术研究院 | Magnetic-resonance fast imaging method and system thereof |
CN106526511A (en) * | 2016-10-21 | 2017-03-22 | 杭州电子科技大学 | SPEED magnetic resonance imaging method based on k space center ghost positioning |
CN109993809A (en) * | 2019-03-18 | 2019-07-09 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks |
CN110517241A (en) * | 2019-08-23 | 2019-11-29 | 吉林大学第一医院 | Method based on the full-automatic stomach fat quantitative analysis of NMR imaging IDEAL-IQ sequence |
CN111123183A (en) * | 2019-12-27 | 2020-05-08 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on complex R2U _ Net network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10885630B2 (en) * | 2018-03-01 | 2021-01-05 | Intuitive Surgical Operations, Inc | Systems and methods for segmentation of anatomical structures for image-guided surgery |
Non-Patent Citations (3)
Title |
---|
Deep Complex Networks; Chiheb Trabelsi et al.; ICLR 2018; full text *
Deep residual learning for compressed sensing MRI; Dongwook Lee et al.; 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017); full text *
Hu Yuan. Research on fast magnetic resonance imaging technology based on deep learning. CNKI. 2020, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN112734869A (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112734869B (en) | Rapid magnetic resonance imaging method based on sparse complex U-shaped network | |
Ghodrati et al. | MR image reconstruction using deep learning: evaluation of network structure and loss functions | |
CN109993809B (en) | Rapid magnetic resonance imaging method based on residual U-net convolutional neural network | |
Lee et al. | Deep residual learning for accelerated MRI using magnitude and phase networks | |
US11079456B2 (en) | Method of reconstructing magnetic resonance image data | |
Lee et al. | Deep artifact learning for compressed sensing and parallel MRI | |
Qu et al. | Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator | |
CN111123183B (en) | Rapid magnetic resonance imaging method based on complex R2U _ Net network | |
Zeng et al. | A comparative study of CNN-based super-resolution methods in MRI reconstruction and its beyond | |
Aghabiglou et al. | Projection-Based cascaded U-Net model for MR image reconstruction | |
Li et al. | High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network | |
Liu et al. | On the regularization of feature fusion and mapping for fast MR multi-contrast imaging via iterative networks | |
Aghabiglou et al. | MR image reconstruction using densely connected residual convolutional networks | |
CN112991483A (en) | Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method | |
Zhang et al. | Compressed sensing MR image reconstruction via a deep frequency-division network | |
Lv et al. | Parallel imaging with a combination of sensitivity encoding and generative adversarial networks | |
CN111754598A (en) | Local space neighborhood parallel magnetic resonance imaging reconstruction method based on transformation learning | |
Hou et al. | PNCS: Pixel-level non-local method based compressed sensing undersampled MRI image reconstruction | |
Luo et al. | Generative Image Priors for MRI Reconstruction Trained from Magnitude-Only Images | |
Lu et al. | A dictionary learning method with total generalized variation for MRI reconstruction | |
Yaman et al. | Improved supervised training of physics-guided deep learning image reconstruction with multi-masking | |
Rashid et al. | Single MR image super-resolution using generative adversarial network | |
Bian et al. | Deep parallel MRI reconstruction network without coil sensitivities | |
CN113509165B (en) | Complex rapid magnetic resonance imaging method based on CAR2UNet network | |
Dai et al. | Deep compressed sensing MRI via a gradient‐enhanced fusion model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||