CN109993809A - Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks - Google Patents
Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks
- Publication number: CN109993809A
- Application number: CN201910201305.1A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/7257—Details of waveform analysis characterised by using transforms using Fourier transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/005—Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/30—Assessment of water resources
Abstract
The invention discloses a rapid magnetic resonance imaging method based on a residual U-net convolutional neural network. The method comprises three steps: preparation of training data, training based on the residual U-net convolutional neural network, and image reconstruction based on the residual U-net convolutional neural network. By adding residual modules to the U-net convolutional neural network, the method of the present invention addresses the vanishing-gradient, overfitting, and slow-convergence problems of U-net convolutional neural networks, and improves the quality of rapid MRI based on U-net convolutional neural networks.
Description
Technical field
The invention belongs to the field of magnetic resonance imaging, and relates to a rapid magnetic resonance imaging method based on a residual U-net convolutional neural network.
Background art
Over the past 20 years, magnetic resonance imaging (MRI) has developed rapidly because of its high soft-tissue resolution and its freedom from ionizing-radiation injury to the human body. However, MRI acquisition is slow, and physiological motion of the subject during imaging often causes artifacts, making the requirements of real-time imaging difficult to meet. How to accelerate MRI is therefore one of the hot topics of MRI theory and technology research.
Researchers usually shorten the data acquisition time of MRI in three ways. The first is to improve MRI hardware by increasing the main magnetic field strength and the gradient switching speed of the scanner; however, because of physiological limits of the human body, the field strength and the gradient slew rate cannot be increased without bound. The second is parallel imaging, for example the image-domain SENSE (Sensitivity Encoding) method and the k-space-domain GRAPPA (GeneRalized Autocalibrating Partial Parallel Acquisition) method; because coil sensitivity is limited, the performance of these reconstruction algorithms degrades when the image signal-to-noise ratio is very low or the acceleration factor is large. The third is to reduce the amount of acquired data, for example regular undersampling of partial k-space, random undersampling based on compressed sensing (CS) theory, and radial or spiral undersampling along non-Cartesian trajectories; however, a large reduction in acquired data markedly degrades image quality, and although researchers can improve the reconstruction of undersampled images with various algorithms, these generally require long reconstruction times and have difficulty meeting the clinical demand for real-time reconstruction.
Since 2016, methods based on convolutional neural networks have been applied to fast MRI. These methods use a large amount of prior information to train a convolutional neural network and obtain optimized network parameters; the trained network can then quickly reconstruct high-quality MRI images. This is a fast MRI approach with great application potential.
Patents applied for to date on fast MRI based on convolutional neural networks include: an MR imaging method and system (application number CN201310633874.6), which uses a deep network model to estimate the mapping between sampled and unsampled points in k-space, thereby estimating the complete k-space data to reconstruct the magnetic resonance image; a rapid magnetic resonance imaging method and device based on deep convolutional neural networks (application number CN201580001261.8), which realizes fast MRI with deep convolutional neural networks; an MR imaging method and device (application number CN201710236330.4), which proposes training the network with undersampled and fully sampled multi-contrast MRI images; a multi-contrast MR image reconstruction method based on convolutional neural networks (application number CN201711354454.9), which trains the convolutional neural network with multi-contrast MRI images; an MR imaging method and system (application number CN201611006925.2), which improves the quality and speed of CS-MRI image reconstruction with a deep learning method; a GRAPPA parallel MR imaging method based on machine learning (application number CN201210288373.4), which provides a machine-learning GRAPPA parallel imaging method; a one-dimensional partial-Fourier parallel MR imaging method based on deep convolutional networks (application number CN201710416357.1) and a magnetic resonance reconstruction method based on deep learning and projection onto convex sets (application number CN201810306848.5), both of which apply deep learning methods to parallel MRI; and a magnetic resonance image super-resolution reconstruction method based on an enhanced recursive residual network (application number CN201810251558.5), which repeatedly uses residual modules as the basic unit to build a recursive residual network for magnetic resonance super-resolution reconstruction. Among deep-learning patents based on U-net convolutional neural networks applied for to date is a medical image segmentation method based on a dual-channel U-shaped convolutional neural network (application number CN201810203917.X), which is mainly used for medical image segmentation. No granted or pending invention patent on a rapid magnetic resonance imaging method based on residual U-net convolutional neural networks has been found to date.
Articles published at home and abroad on fast MRI based on convolutional neural network deep learning include the following. In 2016, Wang S et al. proposed rapid magnetic resonance image reconstruction based on convolutional neural networks (Wang S, et al. Accelerating magnetic resonance imaging via deep learning, in Proc. IEEE 13th Int. Conf. Biomedical Imaging, pp. 514-517, 2016.). Yu S et al. proposed a deep learning method based on generative adversarial networks to accelerate CS-MRI reconstruction (Yu S, Dong H, Yang G, et al. Deep de-aliasing for fast compressive sensing MRI. arXiv preprint arXiv:1705.07137, 2017.). Yang Y et al. proposed adding generalized operators to the nonlinear transform layer of the Generic-ADMM-Net network to form Complex-ADMM-Net for image reconstruction (Yang Y, et al. ADMM-Net: A deep learning approach for compressive sensing MRI. arXiv:1705.06869v1, 2017.). In 2017, Lee D et al. proposed a deep artifact-learning network for CS-MRI parallel imaging (Lee D, Yoo J, Ye J C. Deep artifact learning for compressed sensing and parallel MRI. arXiv preprint arXiv:1703.01120, 2017.), which directly estimates aliasing artifacts with a magnitude network and a phase network; subtracting the estimated artifacts from the image reconstructed from undersampled data yields an aliasing-free image. Hammernik K et al. proposed a deep variational network to accelerate parallel-imaging MRI reconstruction (Hammernik K et al. Learning a variational network for reconstruction of accelerated MRI data, Magn. Reson. Med. vol. 79, no. 6, pp. 3055-3071, 2017.).
Articles published to date on fast MRI based on U-net convolutional neural networks include the following. Jin K H et al. (Jin K H, et al. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 2017, 26(9): 4509-4522.) proposed a deep learning network based on filtered back projection to solve inverse problems in imaging, with the U-net as the underlying network structure. In 2018, Yang G et al. proposed DAGAN, a generative adversarial network based on the U-net, for CS-MRI imaging (Yang G, et al. Dagan: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Transactions on Medical Imaging, 2018, 37(6): 1310-1321.). Hyun C M et al. (Hyun C M, Kim H P, Lee S M, et al. Deep learning for undersampled MRI reconstruction. Physics in Medicine and Biology, 2018.) provided a mathematical foundation for fast MRI with U-net convolutional neural networks. Corey-Zumar applied U-net convolutional neural networks to brain and prostate image reconstruction (https://github.com/Corey-Zumar/MRIReconstruction/tree/master/sub-mrine).
Articles published to date on fast MRI based on residual convolutional neural networks include the following. In 2017, Mardani M et al. proposed a recurrent residual network for compressive recovery of MRI images (Mardani M, Gong E, Cheng J Y, et al. Deep generative adversarial networks for compressed sensing automates MRI. arXiv preprint arXiv:1706.00051, 2017.). In 2018, Sun L et al. proposed a recursive dilated network for CS-MRI imaging, using dilated convolutions together with residual computation to obtain a larger receptive field with fewer parameters (Sun L, Fan Z, Huang Y, et al. Compressed sensing MRI using a recursive dilated network, Thirty-Second AAAI Conference on Artificial Intelligence. 2018.). Lee D et al. proposed a deep residual learning approach that reconstructs using the magnitude and phase information of MRI images (Lee D, Yoo J, Tak S, et al. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Transactions on Biomedical Engineering, 2018, 65(9): 1985-1995.).
The articles and invention patents (or applications) on fast MRI with convolutional neural network deep learning listed above are mainly based on deep learning with convolutional neural networks or U-net convolutional neural networks, or add residual computation to convolutional neural networks; no patent or article on fast MRI that combines residual modules with a U-net convolutional neural network has appeared.
Summary of the invention
The present invention addresses the shortcomings of existing U-net convolutional neural networks in fast magnetic resonance imaging by improving the existing U-net convolutional neural network: 4 residual modules are added to the encoding part of the U-net convolutional neural network, which solves the vanishing-gradient problem during backpropagation of the U-net convolutional neural network and improves the image reconstruction quality of undersampled magnetic resonance data. In addition, to address the excessive fluctuation of the loss function during optimization updates, the present invention uses the Adam (Adaptive Moment Estimation) optimization algorithm in place of the conventional SGD (Stochastic Gradient Descent) optimization algorithm, which further accelerates the convergence of the U-net convolutional neural network and effectively prevents training from terminating too early. Network parameters are initialized by transfer learning, which reduces overfitting. A polynomial decay strategy makes the learning rate decline steadily, and decline faster as the number of epochs increases.
The present invention comprises three steps: preparation of training data, training based on the residual U-net convolutional neural network, and image reconstruction based on the residual U-net convolutional neural network.
Step 1: preparation of training data
The preparation of training data includes 3 steps: fully sampled data acquisition, simulated undersampling, and zero-filling reconstruction.
Step 1-1: fully sampled data acquisition
The fully sampled k-space data is denoted S_r(k_x, k_y), where k_x is the position along the frequency-encoding (FE) direction and k_y is the position along the phase-encoding (PE) direction. The reference image I_ref(x, y) is obtained by the inverse discrete Fourier transform (IDFT):
I_ref(x, y) = IDFT(S_r(k_x, k_y))   [1]
Step 1-2: simulated undersampling
Regular simulated undersampling is applied to the k-space data: along the phase-encoding (PE) direction of k-space, one line of data is acquired every N lines (N an integer greater than 1); 4% of the total number of PE lines, in the central region of k-space, are fully acquired; and all data along the FE direction are fully acquired. The collected undersampled k-space data is denoted S_u(k_x, k_y) and is obtained as the element-wise product of the undersampling mask and the fully sampled k-space data matrix S_r(k_x, k_y), which may be expressed as:
S_u(k_x, k_y) = S_r(k_x, k_y) .* mask(k_x, k_y)   [2]
where each point mask(k_x, k_y) of the undersampling mask corresponds to a point of S_r(k_x, k_y); the mask value is 1 at points to be sampled and 0 at points not acquired:
mask(k_x, k_y) = 1 if (k_x, k_y) is sampled, 0 otherwise   [3]
Step 1-3: zero-filling reconstruction
For the undersampled data S_u(k_x, k_y), the value at points where no data were acquired is set to 0 in k-space; image reconstruction is then performed with the inverse discrete Fourier transform to obtain the zero-filled reconstruction image, denoted I_input(x, y):
I_input(x, y) = IDFT(S_u(k_x, k_y))   [4]
In this way, one pair of training data is obtained: the fully sampled data I_ref(x, y) and the simulated undersampled data I_input(x, y).
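The training-data preparation of Steps 1-1 to 1-3 can be sketched in numpy. This is an illustrative sketch only: the function names, the FFT centering convention, and the example image size are assumptions of the editor, not part of the patent.

```python
import numpy as np

def make_regular_mask(n_pe, n_fe, N=4, center_frac=0.04):
    # Regular undersampling: every N-th phase-encoding line, plus a
    # fully sampled central band of about 4% of the PE lines; the FE
    # direction is always fully sampled (Step 1-2).
    mask = np.zeros((n_pe, n_fe))
    mask[::N, :] = 1.0
    n_center = max(1, int(round(center_frac * n_pe)))
    c0 = n_pe // 2 - n_center // 2
    mask[c0:c0 + n_center, :] = 1.0
    return mask

def training_pair(img, N=4):
    # Eq. [1]: reference image from fully sampled k-space.
    # Eq. [2]: element-wise undersampling with the mask.
    # Eq. [4]: zero-filled reconstruction of the undersampled data.
    S_r = np.fft.fft2(img)
    mask = np.fft.ifftshift(make_regular_mask(img.shape[0], img.shape[1], N))
    S_u = S_r * mask                      # unsampled points become 0
    I_ref = np.fft.ifft2(S_r)             # fully sampled reference
    I_input = np.fft.ifft2(S_u)           # zero-filled reconstruction
    return I_ref, I_input, mask
```

The `ifftshift` call aligns the "central" band of the mask with the DC lines of numpy's unshifted FFT layout; a real acquisition would build the mask directly in the scanner's k-space ordering.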
Step 2: training based on the residual U-net convolutional neural network
Training based on the residual U-net convolutional neural network includes 2 steps: construction of the residual U-net convolutional neural network, and network training.
Step 2-1: construction of the residual U-net convolutional neural network
The construction of the residual U-net convolutional neural network includes 3 steps: feature extraction using convolutional layers, residual computation, and transposed convolution and merging.
Step 2-1-1: feature extraction using convolutional layers
A convolutional layer comprises three operations: convolution (conv), batch normalization (BN), and activation.
The convolution formula is:
z_l = c_{l-1} * W_l + b_l   [5]
where * denotes convolution; the convolution kernel W_l has size s × k_l × k_l × m_l; s is the number of feature maps of layer l-1; k_l is the filter size of layer l; m_l is the number of filters of layer l; b_l is the bias of layer l; z_l is the output of layer l after convolution; and c_{l-1} denotes the feature maps of layer l-1.
The batch normalization formulas are:
x̂ = (z_l − μ) / √(ρ + ε)   [6]
z̃_l = γ x̂ + β   [7]
where μ and ρ are the mean and variance of the batch data, respectively; x̂ is the normalized output; γ and β are empirical parameters; and T is the batch size.
The activation formula is:
c_l = σ(z̃_l)
where σ is the activation function.
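The convolution-BN-activation layer of Step 2-1-1 can be sketched in numpy. This is an illustrative sketch under editor assumptions: the patent does not specify padding, the activation function (ReLU is assumed), or that the convolution is implemented as cross-correlation, as deep-learning frameworks do.

```python
import numpy as np

def conv_bn_act(c_prev, W, b, gamma=1.0, beta=0.0, eps=1e-5):
    # c_prev: (T, H, Wd, s) batch of feature maps from layer l-1.
    # W: (k, k, s, m) filters; b: (m,) bias.  'Same' zero padding.
    # Implements Eq. [5] (as cross-correlation), batch normalization
    # with per-channel batch statistics, then ReLU activation.
    T, H, Wd, s = c_prev.shape
    k, m = W.shape[0], W.shape[3]
    pad = k // 2
    x = np.pad(c_prev, ((0, 0), (pad, pad), (pad, pad), (0, 0)))
    z = np.empty((T, H, Wd, m))
    for i in range(H):                      # naive sliding-window loop
        for j in range(Wd):
            patch = x[:, i:i + k, j:j + k, :]
            z[:, i, j, :] = np.einsum('tkls,klsm->tm', patch, W) + b
    mu = z.mean(axis=(0, 1, 2))             # batch mean per channel
    var = z.var(axis=(0, 1, 2))             # batch variance per channel
    z_hat = (z - mu) / np.sqrt(var + eps)   # Eq. [6]
    return np.maximum(gamma * z_hat + beta, 0.0)  # Eq. [7] + ReLU
```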
Step 2-1-2: residual computation
The residual computation formulas are:
y_l = h(c_l) + F(c_l, w_l)   [8-1]
c_{l+1} = σ(y_l)   [8-2]
where c_l and c_{l+1} are the input and output of the l-th residual unit, and each residual unit applies formulas [8-1] and [8-2] twice; F denotes the residual function; w_l is the residual network parameter; h(c_l) denotes the identity mapping of the input c_l; and y_l is the intermediate result of the residual layer.
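The residual unit of Step 2-1-2 can be sketched as follows. The residual function `F` is passed in abstractly (in the patent it would be the convolutional layers of Step 2-1-1), h is taken as the identity mapping as the text states, and ReLU is an assumed choice of σ.

```python
import numpy as np

def residual_unit(c_l, F, w_l):
    # Eqs. [8-1]/[8-2] with h(c_l) = c_l; the text applies the two
    # formulas twice per residual unit.
    relu = lambda x: np.maximum(x, 0.0)
    out = c_l
    for _ in range(2):
        y = out + F(out, w_l)   # y_l = h(c_l) + F(c_l, w_l)   [8-1]
        out = relu(y)           # c_{l+1} = sigma(y_l)          [8-2]
    return out
```

With `F` set to zero, the unit reduces to the identity on non-negative inputs, which is exactly the property that lets gradients flow through the skip connection.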
Step 2-1-3: transposed convolution and merging
Step 2-1-2 is repeated 4 times, constituting 4 residual modules. The output of the 4th residual module is upsampled, and 4 transposed convolution operations are likewise performed. The convolutional layer c_l in a residual module is merged with the transposed convolutional layer, which may be expressed as:
c_u = concat(σ(c_{l+1} * W_l + b_l), c_l)   [9]
where c_l is the input of the residual unit, σ(c_{l+1} * W_l + b_l) is the output of the transposed convolutional layer, the concat operation merges the two sets of output feature maps, and c_u is the output.
Step 2-2: network training
Network training includes 3 steps: determining the loss function, setting the loop condition, and iterative optimization.
Step 2-2-1: determining the loss function
The mean squared error (MSE) loss function is chosen as the function for backpropagation, and the loss value loss of the output layer is computed with it. For the training dataset {(I_input^j, I_ref^j)}, the loss value is expressed with the mean squared error function:
loss = (1/T) Σ_{j=1}^{T} ||f(I_input^j; θ) − I_ref^j||²   [10]
where T is the batch size, the superscript j denotes the j-th image in the batch, j = (1, 2, ... T), θ denotes the network parameters, and f(·; θ) denotes the network output.
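The batch MSE loss can be written directly. A minimal sketch; the per-image squared L2 norm averaged over T images follows the formula above, and the `float` return and function name are the editor's choices.

```python
import numpy as np

def mse_loss(recon_batch, ref_batch):
    # Batch MSE: squared L2 distance between each reconstructed image
    # and its reference, summed and divided by the batch size T.
    T = recon_batch.shape[0]
    return float(np.sum(np.abs(recon_batch - ref_batch) ** 2) / T)
```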
Step 2-2-2: setting the loop condition
Let the number of iterations be n. The difference Dif between the loss value and the loss threshold is computed as the loop condition:
Dif = loss − τ   [11]
where τ denotes the loss threshold.
If Dif is greater than or equal to 0, step 2-2-3 is executed in a loop until Dif is less than 0 or the number of iterations reaches the set number n, and the iteration loop ends. The optimized network parameters θ are obtained by training the network with backpropagation.
Step 2-2-3: iterative optimization
For the training dataset {(I_input^j, I_ref^j)}, parameter optimization is performed with the Adam algorithm, as follows:
g_t = ∇_θ loss_t   [12-1]
d_t = β1 d_{t-1} + (1 − β1) g_t   [12-2]
v_t = β2 v_{t-1} + (1 − β2) g_t²   [12-3]
d̂_t = d_t / (1 − β1^t),  v̂_t = v_t / (1 − β2^t)   [12-4]
θ_t = θ_{t-1} − lr · d̂_t / (√v̂_t + ∈)   [12-5]
where g_t denotes the average gradient of the loss value loss over batch t, i.e. the parameter gradient; d_t and v_t denote the first-moment and second-moment estimates of g_t; β1 and β2 are empirical parameters; d̂_t and v̂_t are the corrections to them; lr denotes the learning rate; and ∈ is a parameter that prevents the denominator from being 0.
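One Adam update step can be sketched as follows. This is the standard Adam rule (its moment update matches Eq. [12-2] in the text); the default hyperparameter values are common conventions, not values stated by the patent.

```python
import numpy as np

def adam_step(theta, g_t, d, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First/second moment estimates, bias correction, parameter step.
    d = beta1 * d + (1 - beta1) * g_t          # first moment
    v = beta2 * v + (1 - beta2) * g_t ** 2     # second moment
    d_hat = d / (1 - beta1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)               # bias-corrected second moment
    theta = theta - lr * d_hat / (np.sqrt(v_hat) + eps)
    return theta, d, v
```

On the first step (t = 1) the bias corrections make the effective step size approximately lr regardless of the raw gradient magnitude, which is what damps the loss oscillations the text attributes to SGD.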
The learning rate declines by polynomial decay, with the formula:
lr = lr_0 · (1 − epoch / max_epoch)^p   [13]
where epoch denotes the current training epoch, max_epoch denotes the maximum number of epochs, and p denotes the exponent parameter.
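The polynomial decay schedule can be sketched in one line. The exponent value below is an editor's assumption: with power < 1 the rate declines steadily and the decline accelerates in later epochs, matching the behavior the text describes.

```python
def poly_decay_lr(lr0, epoch, max_epoch, power=0.5):
    # Polynomial learning-rate decay: lr0 * (1 - epoch/max_epoch)^power.
    # power = 0.5 (assumed) makes later-epoch decline steeper.
    return lr0 * (1.0 - epoch / max_epoch) ** power
```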
Step 3: image reconstruction based on the residual U-net convolutional neural network
The trained residual U-net convolutional neural network reconstructs the zero-filled undersampled test data I_test(x, y); the reconstruction result is denoted I_output(x, y):
I_output(x, y) = Res_Unet(I_test(x, y), θ)   [14]
where Res_Unet denotes the residual U-net network.
The k-space data of I_output(x, y) is obtained by the discrete Fourier transform and denoted S_p(k_x, k_y). At the points of S_p(k_x, k_y) where data were actually acquired, the acquired data S_u(k_x, k_y) replace the data at the corresponding positions; image reconstruction is then performed with the inverse discrete Fourier transform (IDFT), and I_cor(x, y) denotes the final image reconstruction result:
I_cor(x, y) = IDFT(S_u(k_x, k_y) + S_p(k_x, k_y)(1 − mask(k_x, k_y)))   [15]
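The data-consistency step of Eq. [15] can be sketched in numpy. Illustrative only: the function name is the editor's, and `mask` is assumed to be in the same (unshifted) k-space layout as numpy's `fft2`.

```python
import numpy as np

def data_consistency(I_output, S_u, mask):
    # Eq. [15]: transform the network output to k-space, keep the actually
    # acquired samples S_u where mask == 1, keep the network's k-space
    # prediction elsewhere, then return to the image domain.
    S_p = np.fft.fft2(I_output)
    return np.fft.ifft2(S_u + S_p * (1.0 - mask))
```

Two limiting cases show the behavior: with a fully sampled mask the output is exactly the acquired image, and with an empty mask the network output passes through unchanged.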
With the method of the present invention, that is, the rapid magnetic resonance imaging method based on the residual U-net convolutional neural network, features are extracted with a residual network in the downsampling stage; the residual network effectively prevents gradient vanishing and is easier to train. With trained parameters, the residual U-net convolutional neural network can reconstruct high-quality MRI images in real time while greatly reducing the number of k-space acquisition points and shortening the k-space acquisition time. The invention has the following features:
1) Residual modules are added to the U-net convolutional neural network to improve the accuracy of feature extraction, and parameters can be initialized by transfer learning, effectively preventing overfitting.
2) To address the oscillation of the loss function under the SGD algorithm in current convolutional neural networks, the present invention optimizes with the Adam algorithm, smoothing the loss function and helping obtain better optimized parameters.
3) The polynomial decay strategy makes the learning rate decline steadily, which benefits training, yields better parameters, and improves reconstructed image quality.
4) The present invention reconstructs regularly undersampled k-space data quickly and with high quality; compared with the random undersampling of CS, the data acquisition method of the invention is simpler and easier to realize in hardware.
Brief description of the drawings
Fig. 1 is a schematic diagram of data acquisition using the present invention;
Fig. 2 shows the network structure of the invention;
Fig. 3 compares the results of an image reconstruction example.
Specific embodiment
The present invention comprises three steps: preparation of training data, training based on the residual U-net convolutional neural network, and image reconstruction based on the residual U-net convolutional neural network.
Step 1: preparation of training data
The preparation of training data includes 3 steps: fully sampled data acquisition, simulated undersampling, and zero-filling reconstruction.
Step 1-1: fully sampled data acquisition
The fully sampled k-space data is denoted S_r(k_x, k_y) (as shown in Fig. 1(a)), where k_x is the position along the frequency-encoding (FE) direction and k_y is the position along the phase-encoding (PE) direction. The reference image I_ref(x, y) is obtained by the inverse discrete Fourier transform (IDFT):
I_ref(x, y) = IDFT(S_r(k_x, k_y))   [1]
Step 1-2: simulated undersampling
Regular simulated undersampling is applied to the k-space data: along the phase-encoding (PE) direction of k-space, one line of data is acquired every N lines (N an integer greater than 1); 4% of the total number of PE lines, in the central region of k-space, are fully acquired; and all data along the FE direction are fully acquired. The collected undersampled k-space data is denoted S_u(k_x, k_y) (as shown in Fig. 1(c)) and is obtained as the element-wise product of the undersampling mask (as shown in Fig. 1(b)) and the fully sampled k-space data matrix S_r(k_x, k_y), which may be expressed as:
S_u(k_x, k_y) = S_r(k_x, k_y) .* mask(k_x, k_y)   [2]
where each point mask(k_x, k_y) of the undersampling mask corresponds to a point of S_r(k_x, k_y); the mask value is 1 at points to be sampled and 0 at points not acquired:
mask(k_x, k_y) = 1 if (k_x, k_y) is sampled, 0 otherwise   [3]
Step 1-3: zero-filling reconstruction
For the undersampled data S_u(k_x, k_y), the value at points where no data were acquired is set to 0 in k-space; image reconstruction is then performed with the inverse discrete Fourier transform to obtain the zero-filled reconstruction image, denoted I_input(x, y):
I_input(x, y) = IDFT(S_u(k_x, k_y))   [4]
In this way, one pair of training data is obtained: the fully sampled data I_ref(x, y) and the simulated undersampled data I_input(x, y).
Step 2: training based on the residual U-net convolutional neural network
Training based on the residual U-net convolutional neural network includes 2 steps: construction of the residual U-net convolutional neural network, and network training.
Step 2-1: construction of the residual U-net convolutional neural network
The construction of the residual U-net convolutional neural network includes 3 steps: feature extraction using convolutional layers, residual computation, and transposed convolution and merging.
Step 2-1-1: feature extraction using convolutional layers
A convolutional layer comprises three operations (as shown in Fig. 2): convolution (conv), batch normalization (BN), and activation.
The convolution formula is:
z_l = c_{l-1} * W_l + b_l   [5]
where * denotes convolution; the convolution kernel W_l has size s × k_l × k_l × m_l; s is the number of feature maps of layer l-1; k_l is the filter size of layer l; m_l is the number of filters of layer l; b_l is the bias of layer l; z_l is the output of layer l after convolution; and c_{l-1} denotes the feature maps of layer l-1.
The batch normalization formulas are:
x̂ = (z_l − μ) / √(ρ + ε)   [6]
z̃_l = γ x̂ + β   [7]
where μ and ρ are the mean and variance of the batch data, respectively; x̂ is the normalized output; γ and β are empirical parameters; and T is the batch size.
The activation formula is:
c_l = σ(z̃_l)
where σ is the activation function.
Step 2-1-2: residual computation
The residual computation formulas are:
y_l = h(c_l) + F(c_l, w_l)   [8-1]
c_{l+1} = σ(y_l)   [8-2]
where c_l and c_{l+1} are the input and output of the l-th residual unit, and each residual unit applies formulas [8-1] and [8-2] twice; F denotes the residual function; w_l is the residual network parameter; h(c_l) denotes the identity mapping of the input c_l; and y_l is the intermediate result of the residual layer.
Step 2-1-3: transposed convolution and merging
Step 2-1-2 is repeated 4 times, constituting 4 residual modules (as shown in Fig. 2). The output of the 4th residual module is upsampled, and 4 transposed convolution operations are likewise performed (as shown in Fig. 2). The convolutional layer c_l in a residual module is merged with the transposed convolutional layer 4 times. Specifically, since the number of output channels of the last transposed convolution is inconsistent with the number of channels of the first convolutional layer in the downsampling path, the feature maps output by the first convolutional layer are passed through a convolution before merging, which may be expressed as:
c_u = concat(σ(c_{l+1} * W_l + b_l), c_l)   [9]
where c_l is the input of the residual unit, σ(c_{l+1} * W_l + b_l) is the output of the transposed convolutional layer, the concat operation merges the two sets of output feature maps, and c_u is the output.
Step 2-2: network training
Network training includes 3 steps: determining the loss function, setting the loop condition, and iterative optimization.
Step 2-2-1: determining the loss function
The mean squared error (MSE) loss function is chosen as the function for backpropagation, and the loss value loss of the output layer is computed with it. For the training dataset {(I_input^j, I_ref^j)}, the loss value is expressed with the mean squared error function:
loss = (1/T) Σ_{j=1}^{T} ||f(I_input^j; θ) − I_ref^j||²   [10]
where T is the batch size, the superscript j denotes the j-th image in the batch, j = (1, 2, ... T), θ denotes the network parameters, and f(·; θ) denotes the network output.
Step 2-2-2: setting the loop condition
Let the number of loops be n, and compute the difference Dif between the loss value and the loss threshold as the loop condition:

Dif = loss − τ [11]

where τ denotes the loss threshold.
If Dif is greater than or equal to 0, step 2-2-3 is executed in a loop until Dif is less than 0 or the number of iterations reaches the set number n, at which point the iterative loop ends; the optimized network parameters θ are obtained by training the network through backpropagation.
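The stopping rule (loop while the loss still exceeds the threshold τ and fewer than n iterations have run) can be sketched as follows, with a toy step function standing in for one pass of backpropagation:

```python
def train(initial_loss, tau, n, step_fn):
    """Loop while Dif = loss - tau >= 0, for at most n iterations."""
    loss = initial_loss
    for it in range(n):
        dif = loss - tau
        if dif < 0:          # loss fell below the threshold: stop early
            break
        loss = step_fn(loss)  # one optimization step (stand-in)
    return loss, it

# Toy step: each "iteration" halves the loss
final_loss, iters = train(initial_loss=1.0, tau=0.1, n=100,
                          step_fn=lambda l: l / 2)
print(final_loss, iters)  # 0.0625 4
```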
Step 2-2-3: loop iteration
For the training dataset {Iinput_j(x, y), Iref_j(x, y)}, parameter optimization is carried out with the Adam algorithm, as follows:

gt = ∇θ loss [12-1]
dt = β1·dt−1 + (1 − β1)·gt [12-2]
vt = β2·vt−1 + (1 − β2)·gt² [12-3]
d̂t = dt/(1 − β1^t), v̂t = vt/(1 − β2^t) [12-4]
θt = θt−1 − lr·d̂t/(√v̂t + ∈) [12-5]

where gt denotes the average gradient of the loss value loss over the t-th batch, i.e. the parameter gradient; dt and vt denote the first-moment and second-moment estimates of gt, respectively; β1 and β2 denote empirical parameters; d̂t and v̂t are the corrections to dt and vt; lr denotes the learning rate, and ∈ is a parameter that prevents the denominator from being 0.
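The full Adam update (the text's formula [12-2] is its first-moment step) can be sketched in numpy; the hyperparameter values below are the usual defaults, not taken from the patent:

```python
import numpy as np

def adam_step(theta, g, d, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. d and v are first- and second-moment estimates of
    the gradient g; the hat terms correct their initialization bias;
    eps keeps the denominator away from zero."""
    d = beta1 * d + (1 - beta1) * g        # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2   # second-moment estimate
    d_hat = d / (1 - beta1 ** t)           # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * d_hat / (np.sqrt(v_hat) + eps)  # parameter update
    return theta, d, v

# Minimize the toy loss theta^2, whose gradient is 2*theta
theta = np.array([1.0])
d = np.zeros(1)
v = np.zeros(1)
for t in range(1, 101):
    g = 2 * theta
    theta, d, v = adam_step(theta, g, d, v, t, lr=0.1)
print(theta)  # driven toward the minimizer theta = 0
```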
The learning rate is decayed in a polynomial fashion, with formula:

lr = lr0·(1 − epoch/max_epoch)^power [13]

where epoch denotes the current training epoch, max_epoch denotes the maximum number of epochs, and power denotes the exponent parameter.
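A sketch of polynomial learning-rate decay, assuming the common form with an exponent parameter power and an optional floor lr_end (both names are illustrative, not from the text):

```python
def poly_decay_lr(lr0, epoch, max_epoch, power=2.0, lr_end=0.0):
    """Polynomial decay: the rate falls from lr0 at epoch 0 to lr_end
    at max_epoch, with curvature set by the exponent power."""
    return (lr0 - lr_end) * (1.0 - epoch / max_epoch) ** power + lr_end

print(poly_decay_lr(0.001, epoch=0,   max_epoch=100))  # full rate at the start
print(poly_decay_lr(0.001, epoch=50,  max_epoch=100))  # decayed mid-training
print(poly_decay_lr(0.001, epoch=100, max_epoch=100))  # reaches lr_end
```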
Step 3: image reconstruction based on the residual U-net convolutional neural network.
The trained residual U-net convolutional neural network is used to reconstruct the zero-filled undersampled test data Itest(x, y); the reconstruction result Ioutput(x, y) is expressed as:

Ioutput(x, y) = Res_Unet(Itest(x, y), θ) [14]

where Res_Unet denotes the residual U-net network.
The k-space data of Ioutput(x, y) is obtained by the discrete Fourier transform and denoted Sp(kx, ky). The data at the positions in Sp(kx, ky) where data were actually acquired are replaced with the actually acquired data Su(kx, ky), and image reconstruction is then performed with the inverse discrete Fourier transform (IDFT); Icor(x, y) denotes the final image reconstruction result:

Icor(x, y) = IDFT(Su(kx, ky) + Sp(kx, ky)·(1 − mask(kx, ky))) [15]
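The data-consistency correction of formula [15] can be sketched with numpy FFTs; the mask convention (1 = acquired) follows the text, while the toy 4×4 sizes are arbitrary:

```python
import numpy as np

def data_consistency(img_out, s_u, mask):
    """Formula [15]: transform the network output to k-space, keep the
    actually acquired samples S_u where mask == 1, keep the predicted
    samples elsewhere, then inverse-transform back to image space."""
    s_p = np.fft.fft2(img_out)              # DFT of the reconstruction
    s_corr = s_u * mask + s_p * (1 - mask)  # replace acquired positions
    return np.fft.ifft2(s_corr)             # complex; magnitude is usually taken

# Toy 4x4 example with every other row "acquired"
rng = np.random.default_rng(0)
ref = rng.standard_normal((4, 4))
mask = np.zeros((4, 4))
mask[::2, :] = 1
s_u = np.fft.fft2(ref) * mask               # simulated undersampled data
img_cor = data_consistency(ref, s_u, mask)  # consistent input -> unchanged output
```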
The fast MRI method based on the residual U-net convolutional neural network is illustrated below with MRI data of a human head, as shown in Figure 3. Suppose the MRI data Sref(kx, ky) to be acquired has matrix size kx × ky = 256 × 256; the collected data are inverse-Fourier-transformed to obtain the reference image Iref(x, y). In the phase-encoding (PE) direction of k-space, one line of k-space data is acquired every N = 4 lines, and a total of 14 lines of phase-encoded data are acquired in the fully sampled, information-rich central region of k-space, giving the undersampled k-space data Su(kx, ky). Conventional zero-filling Fourier reconstruction is applied to the acquired undersampled data Su(kx, ky) to obtain the zero-filled reconstruction image Iinput(x, y). 2850 Sref(kx, ky) data are acquired, 2650 of which are taken to build the training set. The residual U-net convolutional neural network is then constructed, mainly comprising three major steps: convolutional layers, residual computation, and transposed convolution with merging. After the residual U-net convolutional neural network is built, it is trained with the training data; when the network training error is less than the loss threshold or the number of training iterations reaches n, training ends, yielding the parameter-optimized residual U-net convolutional neural network. The trained residual U-net convolutional neural network is used for image reconstruction of the test data (200 of the 2850 data are taken as test data), and the final output image Icor(x, y) is obtained after correction.
The CPU of the test computer is an i5-4460 at 3.2 GHz with 16 GB of memory; the graphics card is a GTX 1080 with 8 GB of video memory. The training data are 2850 brain images, each of size 256×256, and the test data are 200 brain images; training takes about 2 hours, and reconstruction takes about 1.2 s.
The upper and lower rows in Figure 3 show the reconstructed images and the corresponding difference maps, respectively. Panel (a) is the reference image, (b) the zero-filled reconstruction, (c) the reconstruction based on the U-net convolutional neural network, and (d) the reconstruction using the present method; (e), (f), and (g) are the difference maps of (b), (c), and (d) against (a), respectively. The difference maps show that the reconstruction quality of the present method is better than both the zero-filled reconstruction and the U-net-based reconstruction.
The TRE error of the reconstruction based on the present method is 0.6749, the TRE error of the U-net-based reconstruction is 0.8080, and the TRE error of the zero-filled reconstruction is 1.6215, where TRE denotes the total relative error between a reconstructed image and the reference image.
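As one plausible definition of a total relative error (an assumption for illustration, not the text's exact formula), TRE can be computed as the l2 norm of the reconstruction error divided by the l2 norm of the reference:

```python
import numpy as np

def tre(recon, ref):
    """Hypothetical total-relative-error metric (assumed form):
    ||recon - ref||_2 / ||ref||_2."""
    return np.linalg.norm(recon - ref) / np.linalg.norm(ref)

ref = np.ones((4, 4))
recon = np.full((4, 4), 1.1)  # uniform 10% overshoot
print(tre(recon, ref))        # ~0.1 relative error
```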
It can be seen that, by training the residual U-net convolutional neural network and using the trained parameters, the present invention achieves fast, high-quality imaging of undersampled MRI data; compared with the imaging method based on the U-net convolutional neural network, the present method yields better image quality.
Claims (1)
1. A rapid magnetic resonance imaging method based on a residual U-net convolutional neural network, characterized in that the method specifically includes the following three steps: preparation of training data, training based on the residual U-net convolutional neural network, and image reconstruction based on the residual U-net convolutional neural network;
Step 1: preparation of training data
The preparation of training data includes 3 steps: fully sampled data acquisition, simulated undersampling, and zero-filling reconstruction;
Step 1-1: fully sampled data acquisition
The fully sampled k-space data is denoted Sr(kx, ky), where kx denotes the position in the frequency-encoding (FE) direction of k-space and ky denotes the position in the phase-encoding (PE) direction; the reference image Iref(x, y) is obtained by the inverse discrete Fourier transform:
Iref(x, y) = IDFT(Sr(kx, ky)) [1]
Step 1-2: simulated undersampling
Regular simulated undersampling is applied to the k-space data: one line of data is acquired every N lines in the phase-encoding (PE) direction of k-space, where N is an integer greater than 1; 4% of the total number of PE lines are fully acquired in the central region of k-space, and the data in the FE direction are fully acquired; Su(kx, ky) denotes the acquired undersampled k-space data. The simulated undersampled data are obtained by element-wise multiplication of the undersampling template mask with the fully sampled k-space data matrix Sr(kx, ky), formulated as:
Su(kx, ky) = Sr(kx, ky) .* mask(kx, ky) [2]
where each point mask(kx, ky) of the undersampling template mask corresponds to a point of the matrix Sr(kx, ky); the value of mask at a point that is to be sampled is 1, and its value at a point that is not acquired is 0:

mask(kx, ky) = 1 if the point (kx, ky) is sampled, and 0 otherwise [3]
Step 1-3: zero-filling reconstruction
For the undersampled data Su(kx, ky), the value at every point in k-space where no data were acquired is set to 0, and image reconstruction is then performed with the inverse discrete Fourier transform to obtain the zero-filled reconstruction image, denoted Iinput(x, y):
Iinput(x, y) = IDFT(Su(kx, ky)) [4]
In this way a pair of training data is obtained, i.e. the fully sampled data Iref(x, y) and the simulated undersampled data Iinput(x, y);
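Steps 1-1 through 1-3 can be sketched end to end with numpy (toy 8×8 k-space, N = 2, a 2-row fully sampled center band; all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))        # stand-in fully sampled image
s_r = np.fft.fftshift(np.fft.fft2(img))  # fully sampled, center-shifted k-space S_r

# Undersampling template mask: one PE line every N lines, plus a fully
# sampled central band (the information-rich region, here rows 3:5)
N = 2
mask = np.zeros((8, 8))
mask[::N, :] = 1
mask[3:5, :] = 1
s_u = s_r * mask                         # formula [2]: S_u = S_r .* mask

i_ref = np.fft.ifft2(np.fft.ifftshift(s_r)).real    # formula [1]: reference image
i_input = np.fft.ifft2(np.fft.ifftshift(s_u)).real  # formula [4]: zero-filled image
err = np.linalg.norm(i_input - i_ref)               # aliasing error of zero filling
```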
Step 2: training based on the residual U-net convolutional neural network
The training based on the residual convolutional neural network includes 2 steps: building the residual U-net convolutional neural network, and network training;
Step 2-1: building the residual U-net convolutional neural network
Building the residual U-net convolutional neural network includes 3 steps: feature extraction with convolutional layers, residual computation, and transposed convolution with merging;
Step 2-1-1: feature extraction with convolutional layers
A convolutional layer comprises three operations: convolution, batch normalization, and activation;
the convolution formula is:
zl = cl−1*Wl + bl [5]
where * denotes convolution; the size of the convolution kernel W is s × kl × kl × ml; s denotes the number of feature maps of layer l−1; kl denotes the size of the filters of layer l; ml denotes the number of filters of layer l; bl denotes the bias of layer l; zl denotes the output of layer l after convolution; cl−1 denotes the feature maps of layer l−1;
the batch normalization formula is:

ẑl = γ·(zl − μ)/√ρ + β [6]

where μ and ρ are the mean and variance of the batch data, respectively; ẑl is the normalized output; γ and β are empirical parameters; T is the batch size;
the activation formula is:

cl = σ(ẑl) [7]

where σ is the activation function;
Step 2-1-2: residual computation
The residual computation formulas are:
yl = h(cl) + F(cl, wl) [8-1]
cl+1 = σ(yl) [8-2]
where cl and cl+1 are the input and output of the l-th residual unit, and each residual unit applies formulas [8-1] and [8-2] twice; F denotes the residual function; wl denotes the residual-network parameters; h(cl) denotes the identity mapping of the input cl; yl denotes the intermediate result of the residual-layer computation;
Step 2-1-3: transposed convolution and merging
Step 2-1-2 is repeated 4 times to form 4 residual modules; the output of the 4th residual module is up-sampled, likewise by 4 transposed convolution operations, and the convolutional layers cl in the residual modules are merged with the transposed convolutional layers, expressed as:
cu = concat(σ(cl+1*Wl+bl), cl) [9]
where cl denotes the input of the residual unit, σ(cl+1*Wl+bl) denotes the output of the transposed convolutional layer, the concat operation merges the two sets of output feature maps, and cu denotes the output;
Step 2-2: network training
Network training includes 3 steps: determining the loss function, setting the loop condition, and iterative optimization;
Step 2-2-1: determining the loss function
The mean squared error function is chosen as the loss function for backpropagation, and the loss value loss of the output layer is computed through it; for the training dataset {Iinput_j(x, y), Iref_j(x, y)}, the loss value is expressed with the mean squared error function:

loss = (1/T) Σ_{j=1}^{T} ||Res_Unet(Iinput_j(x, y), θ) − Iref_j(x, y)||₂² [10]

where T denotes the batch size, the subscript j denotes the j-th image in the batch, j = 1, 2, …, T, and θ denotes the network parameters;
Step 2-2-2: setting the loop condition
Let the number of loops be n, and compute the difference Dif between the loss value and the loss threshold as the loop condition:

Dif = loss − τ [11]

where τ denotes the loss threshold;
if Dif is greater than or equal to 0, step 2-2-3 is executed in a loop until Dif is less than 0 or the number of iterations reaches the set number n, at which point the iterative loop ends; the optimized network parameters θ are obtained by training the network through backpropagation;
Step 2-2-3: iterative optimization
For the batch training dataset {Iinput_j(x, y), Iref_j(x, y)}, parameter optimization is carried out with the Adam algorithm, as follows:

gt = ∇θ loss [12-1]
dt = β1·dt−1 + (1 − β1)·gt [12-2]
vt = β2·vt−1 + (1 − β2)·gt² [12-3]
d̂t = dt/(1 − β1^t), v̂t = vt/(1 − β2^t) [12-4]
θt = θt−1 − lr·d̂t/(√v̂t + ∈) [12-5]

where gt denotes the average gradient of the loss value loss over the t-th batch, i.e. the parameter gradient; dt and vt denote the first-moment and second-moment estimates of gt, respectively; β1 and β2 denote empirical parameters; d̂t and v̂t are the corrections to dt and vt; lr denotes the learning rate, and ∈ is a parameter that prevents the denominator from being 0;
The learning rate is decayed in a polynomial fashion, with formula:

lr = lr0·(1 − epoch/max_epoch)^power [13]

where epoch denotes the current training epoch, max_epoch denotes the maximum number of epochs, and power denotes the exponent parameter;
Step 3: image reconstruction based on the residual U-net convolutional neural network;
The trained residual U-net convolutional neural network is used to reconstruct the zero-filled undersampled test data Itest(x, y); the reconstruction result Ioutput(x, y) is expressed as:
Ioutput(x, y) = Res_Unet(Itest(x, y), θ) [14]
where Res_Unet denotes the residual U-net network;
the k-space data of the result Ioutput(x, y) is obtained by the discrete Fourier transform and denoted Sp(kx, ky); the data at the positions in k-space where data were actually acquired are replaced with the actually acquired data Su(kx, ky), and image reconstruction is then performed with the inverse discrete Fourier transform; Icor(x, y) denotes the final image reconstruction result:
Icor(x, y) = IDFT(Su(kx, ky) + Sp(kx, ky)·(1 − mask(kx, ky))) [15].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910201305.1A CN109993809B (en) | 2019-03-18 | 2019-03-18 | Rapid magnetic resonance imaging method based on residual U-net convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109993809A true CN109993809A (en) | 2019-07-09 |
CN109993809B CN109993809B (en) | 2023-04-07 |
Family
ID=67129436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910201305.1A Active CN109993809B (en) | 2019-03-18 | 2019-03-18 | Rapid magnetic resonance imaging method based on residual U-net convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109993809B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460726A (en) * | 2018-03-26 | 2018-08-28 | 厦门大学 | A kind of magnetic resonance image super-resolution reconstruction method based on enhancing recurrence residual error network |
CN108335339A (en) * | 2018-04-08 | 2018-07-27 | 朱高杰 | A kind of magnetic resonance reconstruction method based on deep learning and convex set projection |
CN108828481A (en) * | 2018-04-24 | 2018-11-16 | 朱高杰 | A kind of magnetic resonance reconstruction method based on deep learning and data consistency |
GB201814358D0 (en) * | 2018-09-04 | 2018-10-17 | Perspectum Diagnostics Ltd | A method of analysing images |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349237A (en) * | 2019-07-18 | 2019-10-18 | 华中科技大学 | Quick body imaging method based on convolutional neural networks |
ES2813777A1 (en) * | 2019-09-23 | 2021-03-24 | Quibim S L | METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF HYPERINTENSITIES OF WHITE SUBSTANCE IN BRAIN MAGNETIC RESONANCE IMAGES (Machine-translation by Google Translate, not legally binding) |
WO2021058843A1 (en) * | 2019-09-23 | 2021-04-01 | Quibim, S.L. | Method and system for the automatic segmentation of white matter hyperintensities in brain magnetic resonance images |
CN110717958A (en) * | 2019-10-12 | 2020-01-21 | 深圳先进技术研究院 | Image reconstruction method, device, equipment and medium |
WO2021083105A1 (en) * | 2019-10-29 | 2021-05-06 | 北京灵汐科技有限公司 | Neural network mapping method and apparatus |
US11769044B2 (en) | 2019-10-29 | 2023-09-26 | Lynxi Technologies Co., Ltd. | Neural network mapping method and apparatus |
CN111028306A (en) * | 2019-11-06 | 2020-04-17 | 杭州电子科技大学 | AR2U-Net neural network-based rapid magnetic resonance imaging method |
CN111028306B (en) * | 2019-11-06 | 2023-07-14 | 杭州电子科技大学 | AR2U-Net neural network-based rapid magnetic resonance imaging method |
CN110992318A (en) * | 2019-11-19 | 2020-04-10 | 上海交通大学 | Special metal flaw detection system based on deep learning |
CN110969633A (en) * | 2019-11-28 | 2020-04-07 | 南京安科医疗科技有限公司 | Automatic optimal phase recognition method for cardiac CT imaging |
CN110969633B (en) * | 2019-11-28 | 2024-02-27 | 南京安科医疗科技有限公司 | Automatic optimal phase identification method for cardiac CT imaging |
CN112494029A (en) * | 2019-11-29 | 2021-03-16 | 上海联影智能医疗科技有限公司 | Real-time MR movie data reconstruction method and system |
CN112419437A (en) * | 2019-11-29 | 2021-02-26 | 上海联影智能医疗科技有限公司 | System and method for reconstructing magnetic resonance images |
CN110916664A (en) * | 2019-12-10 | 2020-03-27 | 电子科技大学 | Rapid magnetic resonance image reconstruction method based on deep learning |
CN110942496A (en) * | 2019-12-13 | 2020-03-31 | 厦门大学 | Propeller sampling and neural network-based magnetic resonance image reconstruction method and system |
CN111105423A (en) * | 2019-12-17 | 2020-05-05 | 北京小白世纪网络科技有限公司 | Deep learning-based kidney segmentation method in CT image |
CN111105423B (en) * | 2019-12-17 | 2021-06-29 | 北京小白世纪网络科技有限公司 | Deep learning-based kidney segmentation method in CT image |
CN111123183A (en) * | 2019-12-27 | 2020-05-08 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on complex R2U _ Net network |
CN111123183B (en) * | 2019-12-27 | 2022-04-15 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on complex R2U _ Net network |
CN113192150A (en) * | 2020-01-29 | 2021-07-30 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN113192150B (en) * | 2020-01-29 | 2022-03-15 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN111358430B (en) * | 2020-02-24 | 2021-03-09 | 深圳先进技术研究院 | Training method and device for magnetic resonance imaging model |
CN111358430A (en) * | 2020-02-24 | 2020-07-03 | 深圳先进技术研究院 | Training method and device for magnetic resonance imaging model |
CN111681296B (en) * | 2020-05-09 | 2024-03-22 | 上海联影智能医疗科技有限公司 | Image reconstruction method, image reconstruction device, computer equipment and storage medium |
CN111681296A (en) * | 2020-05-09 | 2020-09-18 | 上海联影智能医疗科技有限公司 | Image reconstruction method and device, computer equipment and storage medium |
CN112183718A (en) * | 2020-08-31 | 2021-01-05 | 华为技术有限公司 | Deep learning training method and device for computing equipment |
CN112183718B (en) * | 2020-08-31 | 2023-10-10 | 华为技术有限公司 | Deep learning training method and device for computing equipment |
US11992289B2 (en) | 2020-10-01 | 2024-05-28 | Shanghai United Imaging Intelligence Co., Ltd. | Fast real-time cardiac cine MRI reconstruction with residual convolutional recurrent neural network |
US20220114699A1 (en) * | 2020-10-09 | 2022-04-14 | The Regents Of The University Of California | Spatiotemporal resolution enhancement of biomedical images |
CN112164122B (en) * | 2020-10-30 | 2022-08-23 | 哈尔滨理工大学 | Rapid CS-MRI reconstruction method for generating countermeasure network based on depth residual error |
CN112164122A (en) * | 2020-10-30 | 2021-01-01 | 哈尔滨理工大学 | Rapid CS-MRI reconstruction method for generating countermeasure network based on depth residual error |
CN112437451A (en) * | 2020-11-10 | 2021-03-02 | 南京大学 | Wireless network flow prediction method and device based on generation countermeasure network |
CN112734869B (en) * | 2020-12-15 | 2024-04-26 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on sparse complex U-shaped network |
CN112748382A (en) * | 2020-12-15 | 2021-05-04 | 杭州电子科技大学 | SPEED magnetic resonance imaging method based on CUNet artifact positioning |
CN112734869A (en) * | 2020-12-15 | 2021-04-30 | 杭州电子科技大学 | Rapid magnetic resonance imaging method based on sparse complex U-shaped network |
CN112946545B (en) * | 2021-01-28 | 2022-03-18 | 杭州电子科技大学 | PCU-Net network-based fast multi-channel magnetic resonance imaging method |
CN112946545A (en) * | 2021-01-28 | 2021-06-11 | 杭州电子科技大学 | PCU-Net network-based fast multi-channel magnetic resonance imaging method |
CN112862787A (en) * | 2021-02-10 | 2021-05-28 | 昆明同心医联科技有限公司 | CTA image data processing method, device and storage medium |
CN113077527A (en) * | 2021-03-16 | 2021-07-06 | 天津大学 | Rapid magnetic resonance image reconstruction method based on undersampling |
CN114266939B (en) * | 2021-12-23 | 2022-11-01 | 太原理工大学 | Brain extraction method based on ResTLU-Net model |
CN114266939A (en) * | 2021-12-23 | 2022-04-01 | 太原理工大学 | Brain extraction method based on ResTLU-Net model |
CN116597037A (en) * | 2023-05-22 | 2023-08-15 | 厦门大学 | Physical generation data-driven rapid magnetic resonance intelligent imaging method |
CN117409100A (en) * | 2023-12-15 | 2024-01-16 | 山东师范大学 | CBCT image artifact correction system and method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN109993809B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109993809A (en) | Rapid magnetic resonance imaging method based on residual error U-net convolutional neural networks | |
CN110151181B (en) | Rapid magnetic resonance imaging method based on recursive residual U-shaped network | |
CN111028306B (en) | AR2U-Net neural network-based rapid magnetic resonance imaging method | |
Sandino et al. | Compressed sensing: From research to clinical practice with deep neural networks: Shortening scan times for magnetic resonance imaging | |
Lee et al. | Deep residual learning for accelerated MRI using magnitude and phase networks | |
Tezcan et al. | MR image reconstruction using deep density priors | |
Pezzotti et al. | An adaptive intelligence algorithm for undersampled knee MRI reconstruction | |
CN111123183B (en) | Rapid magnetic resonance imaging method based on complex R2U _ Net network | |
CN108335339A (en) | A kind of magnetic resonance reconstruction method based on deep learning and convex set projection | |
CN109360152A (en) | 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks | |
CN104013403B (en) | A kind of three-dimensional cardiac MR imaging method based on resolution of tensor sparse constraint | |
CN102590773B (en) | Magnetic resonance imaging method and system | |
CN112446873A (en) | Method for removing image artifacts | |
CN109003229A (en) | Magnetic resonance super resolution ratio reconstruction method based on three-dimensional enhancing depth residual error network | |
CN114119791A (en) | MRI (magnetic resonance imaging) undersampled image reconstruction method based on cross-domain iterative network | |
Ravishankar et al. | Physics-driven deep training of dictionary-based algorithms for MR image reconstruction | |
CN113971706A (en) | Rapid magnetic resonance intelligent imaging method | |
Pezzotti et al. | An adaptive intelligence algorithm for undersampled knee mri reconstruction: Application to the 2019 fastmri challenge | |
CN112734869A (en) | Rapid magnetic resonance imaging method based on sparse complex U-shaped network | |
CN105678822A (en) | Three-regular magnetic resonance image reconstruction method based on Split Bregman iteration | |
CN106093814A (en) | A kind of cardiac magnetic resonance imaging method based on multiple dimensioned low-rank model | |
Hou et al. | PNCS: Pixel-level non-local method based compressed sensing undersampled MRI image reconstruction | |
Yakkundi et al. | Convolutional LSTM: A deep learning approach for dynamic MRI reconstruction | |
CN113509165B (en) | Complex rapid magnetic resonance imaging method based on CAR2UNet network | |
CN114913262A (en) | Nuclear magnetic resonance imaging method and system based on joint optimization of sampling mode and reconstruction algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||