CN111161370A - Human body multi-core DWI joint reconstruction method based on AI - Google Patents

Human body multi-core DWI joint reconstruction method based on AI

Info

Publication number
CN111161370A
CN111161370A (application number CN201911400857.1A)
Authority
CN
China
Prior art keywords
dwi
image
core
model
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911400857.1A
Other languages
Chinese (zh)
Other versions
CN111161370B (en)
Inventor
Zhou Xin (周欣)
Duan Caohui (段曹辉)
Deng He (邓鹤)
Lou Xin (娄昕)
Sun Xianping (孙献平)
Ye Chaohui (叶朝辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Precision Measurement Science and Technology Innovation of CAS
Original Assignee
Wuhan Institute of Physics and Mathematics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Physics and Mathematics of CAS filed Critical Wuhan Institute of Physics and Mathematics of CAS
Priority to CN201911400857.1A priority Critical patent/CN111161370B/en
Publication of CN111161370A publication Critical patent/CN111161370A/en
Application granted granted Critical
Publication of CN111161370B publication Critical patent/CN111161370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses an AI-based human multi-nuclear DWI joint reconstruction method, which establishes a human multi-nuclear DWI image training set; establishes a human multi-nuclear DWI joint reconstruction model; defines a loss function for the joint reconstruction model; trains the model with a gradient descent algorithm; and inputs a new undersampled DWI image into the trained model, obtaining the final reconstructed image containing the different b values through forward propagation of the model. The invention obtains high-quality reconstructed images at high acceleration factors and has a high reconstruction speed.

Description

Human body multi-core DWI joint reconstruction method based on AI
Technical Field
The invention relates to the technical fields of multi-nuclear Magnetic Resonance Imaging (MRI), Artificial Intelligence (AI), deep learning and undersampled reconstruction, and in particular to an AI-based human multi-nuclear DWI joint reconstruction method suitable for accelerating the imaging speed of human multi-nuclear (e.g., ¹²⁹Xe, ³He) DWI, or for acquiring more data within the same scan time.
Background
Multi-nuclear MRI can provide abundant physiological and pathological information. For example, hyperpolarized gas (¹²⁹Xe, ³He) pulmonary MRI can provide high-resolution structural and functional images of the lungs, and hyperpolarized gas pulmonary DWI in particular can sensitively assess structural and functional changes associated with pulmonary disease. Combined with a gas-diffusion theoretical model, multi-b-value DWI can non-invasively and quantitatively obtain lung morphological parameters at the alveolar level, such as the acinar airway external radius (R), internal radius (r), alveolar sleeve depth (h), mean linear intercept (Lm) and surface-to-volume ratio (SVR). However, multi-b-value DWI requires a long acquisition time. For example, acquiring a set of low-resolution DWI data (4 slices, 5 b values, resolution 64 × 64) requires a breath-hold of approximately 18 s, and acquiring a set of 3D whole-lung DWI data (10-15 mm slice thickness) requires more than 1 min. Although some studies have acquired multi-b-value DWI data with a multi-breath approach, multiple breath-holds lead to differences in lung volume, longer acquisition times and higher gas costs. Therefore, the DWI imaging speed needs to be accelerated, and a single-breath-hold multi-b-value DWI imaging method needs to be developed.
Compressed-sensing-based MRI (CS-MRI) speeds up imaging by undersampling k-space without additional hardware or sequences. Chan et al. applied CS-MRI to 3D multi-b-value DWI, enabling single-breath-hold whole-lung morphological parameter measurements [Chan et al. Magn Reson Med, 2017, 77:1916-]. Abascal et al. undersampled DWI data in the spatial and diffusion directions and reconstructed with prior knowledge of the signal attenuation, obtaining acceleration factors of 7 to 10 and significantly shortening the imaging time of multi-b-value DWI [Abascal et al. IEEE Trans Med Imaging, 2018, 37:547-]; Westcott et al. further applied this method to high-resolution hyperpolarized ³He multi-b-value DWI [Westcott et al. J Magn Reson Imaging, 2019, 49:1713-1722]. However, the CS-MRI technique has some limitations. Its nonlinear reconstruction algorithm involves iterative computation and requires a long reconstruction time; for example, in the study of Westcott et al., reconstructing one slice of DWI images takes 2-3 min, which makes it difficult to meet the requirement of clinical real-time reconstruction. In addition, the hyperparameters of CS-MRI are difficult to select, and improper hyperparameters lead to over-smoothed reconstructions or residual undersampling artifacts.
More recently, AI has been applied to the field of MRI undersampled reconstruction. AI-based MRI reconstruction uses deep convolutional neural networks (CNN) to extract abstract feature representations and learns the nonlinear mapping between undersampled and fully sampled images from a large amount of training data. Compared with CS-MRI, AI-based MRI reconstruction has clear advantages in reconstruction speed, image quality and achievable acceleration factor. However, because hyperpolarized DWI images have a low signal-to-noise ratio and training data are scarce, the application of AI to hyperpolarized DWI reconstruction has not yet been studied.
Compared with other MRI modalities (T1, T2, etc.), DWI images are multi-channel data composed of images at different b values, and possess not only spatial sparsity but also low rank along the diffusion-gradient direction. Wang et al. proposed a joint denoising CNN model that improves the denoising of DWI images by cascading high-level features of different b-value images [Wang et al. J Magn Reson Imaging, 2019, 50:1937-]. Xiang et al. proposed a multi-modal fusion method that reconstructs undersampled T2-weighted images using complementary information from T1-weighted images [Xiang et al. IEEE Trans Biomed Eng, 2018, 66:2105-2114]. Similarly, if the data redundancy of hyperpolarized multi-b-value DWI in the spatial and diffusion directions is fully exploited, the reconstruction quality of DWI images can be further improved.
Based on the above analysis, the invention provides an AI-based human multi-nuclear DWI joint reconstruction method. The method uses a CNN model to learn the nonlinear mapping between undersampled and fully sampled images, and mines the data redundancy of DWI in the spatial and diffusion-gradient directions through joint reconstruction, thereby improving the reconstruction quality. Compared with CS-MRI, the method achieves better image reconstruction and a faster reconstruction speed at high acceleration factors (≥ 4).
Disclosure of Invention
The object of the invention is to provide an AI-based human multi-nuclear DWI joint reconstruction method that addresses the defects and shortcomings of the prior art.
To achieve this object, the invention adopts the following technical scheme:
An AI-based human multi-nuclear DWI joint reconstruction method comprises the following steps:
step 1, establishing a human body multi-kernel DWI image training set, wherein the human body multi-kernel DWI image training set comprises an undersampled DWI image y and a full-sampling DWI image x.
Step 1.1, acquiring a fully sampled DWI image x with a diffusion sensitivity factor b value of 0 from a magnetic resonance imagerb
And 1.2, generating a full sampling DWI image x. Using DWI images x with a diffusion sensitivity factor b value of 0bDWI signal diffusion model, DWI image x of each b valueb. DWI image x of individual b-valuesbAre combined into a fully sampled DWI image x.
The DWI signal diffusion model is:
x_b = x_0 · exp(−b·D_T) · √(π / (4·b·(D_L − D_T))) · Φ(√(b·(D_L − D_T)))
where b is the diffusion sensitivity factor, D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, Φ is the error function, and x_0 is the DWI image with diffusion sensitivity factor b = 0.
Step 1.3, establishing the human multi-nuclear DWI image training set. An undersampling matrix U is generated, and the fully sampled DWI image x is retrospectively undersampled with U to obtain the undersampled DWI image y. The undersampled DWI images y and the fully sampled DWI images x form the human multi-nuclear DWI image training set.
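For illustration, a minimal NumPy sketch of the retrospective undersampling in step 1.3 is given below. The mask pattern (randomly selected phase-encoding lines at a 1/4 sampling rate) and the use of a single mask shared by all b values are assumptions made for the example, not requirements of the invention.

```python
import numpy as np

def undersample(x_full, mask):
    """Retrospectively undersample a fully sampled DWI image x of shape (H, W, B) in k-space.

    x_full : array with one channel per b value.
    mask   : binary undersampling matrix U of shape (H, W); here it is assumed to be shared
             by all b-value channels.
    Returns the undersampled (zero-filled) DWI image y, which contains aliasing artifacts.
    """
    k_full = np.fft.fft2(x_full, axes=(0, 1))           # k-space of every b-value channel
    k_under = k_full * mask[..., None]                  # keep only the sampled coordinates (the set Omega)
    return np.fft.ifft2(k_under, axes=(0, 1))           # zero-filled reconstruction = undersampled image y

# Toy example: 64 x 64 image with 5 b values, 1/4 of the phase-encoding lines sampled.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64, 5))                    # stands in for a fully sampled DWI image x
mask = np.zeros((64, 64))
mask[:, rng.choice(64, size=16, replace=False)] = 1.0   # undersampling matrix U, sampling rate 1/4
y = undersample(x, mask)                                # (y, x) is one training pair
```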
Step 2, establishing the human multi-nuclear DWI joint reconstruction model. The model is denoted G(·, θ), where · represents the model input and θ denotes the model parameters; the output of the model is the final reconstructed image x̂.
The human multi-nuclear DWI joint reconstruction model is a CNN model.
The model comprises a residual dense block (RDB) and a data consistency (DC) layer. The residual dense block consists of three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection.
The feature extraction layer extracts features from the model input to generate a first feature map and passes it to the dense block. The dense block further extracts features from the first feature map to obtain a second feature map and passes it to the reconstruction layer with the residual connection. The reconstruction layer synthesizes the second feature map into a residual image and then applies the residual connection to obtain a preliminary reconstructed image x_c. The preliminary reconstructed image x_c is fed into the data consistency layer to obtain the final reconstructed image x̂, as follows:
The data consistency layer substitutes the preliminary reconstructed image x_c into the following formula to obtain the data-consistent k-space data k_DC, and the final reconstructed image x̂ is obtained by applying the inverse Fourier transform to k_DC:
k_DC(j) = k_0(j), if j ∈ Ω;  k_DC(j) = k_c(j), if j ∉ Ω
where k_c = F·x_c, k_0 = F·y, F is the Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y.
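Per b-value channel, the data consistency operation just defined reduces to replacing the k-space samples on Ω with the measured data k_0 and keeping the network's prediction elsewhere. A minimal NumPy sketch, with Ω encoded as a binary mask:

```python
import numpy as np

def data_consistency(x_c, k0, mask):
    """Data consistency layer for one b-value channel.

    x_c  : preliminary reconstruction from the residual dense block, shape (H, W), complex.
    k0   : measured undersampled k-space data F*y, shape (H, W), complex; zero outside Omega.
    mask : binary array, 1 at sampled k-space coordinates (Omega), 0 elsewhere.
    """
    k_c = np.fft.fft2(x_c)                 # k_c = F x_c
    k_dc = np.where(mask == 1, k0, k_c)    # keep measured samples on Omega, fill the rest from the CNN output
    return np.fft.ifft2(k_dc)              # final reconstruction = inverse Fourier transform of k_DC
```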
Step 3, defining the loss function L(θ) of the human multi-nuclear DWI joint reconstruction model G(·, θ):
L(θ) = E[||x − G(y, θ)||_2] + η·E[||Ψ(x) − Ψ(G(y, θ))||_2]
where E[·] denotes the expectation operator, y is the undersampled DWI image, G(y, θ) is the final reconstructed image x̂, ||·||_2 denotes the L2 norm, Ψ denotes an estimation function of the apparent diffusion coefficient (ADC), and η is the weighting factor of the apparent diffusion coefficient loss.
The first term of the above equation is the pixel-level loss between the fully sampled image and the reconstructed image, and the second term is the loss between the apparent diffusion coefficients estimated from the fully sampled image and from the reconstructed image. Because the apparent diffusion coefficient extracted from DWI images has important physiological significance, adding the apparent diffusion coefficient loss to the loss function improves the accuracy of the apparent diffusion coefficient estimated from the reconstructed image.
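A sketch of this loss in TensorFlow is shown below. The patent does not give the exact form of the ADC estimator Ψ, so a simple two-point log-ratio estimate between the b = 0 channel and the largest-b channel is assumed here, the mean squared error stands in for the L2 penalty, and η defaults to the 0.001 used in Embodiment 1; the channel-last layout (one channel per b value) is also an assumption of the example.

```python
import tensorflow as tf

def adc_map(dwi, b_values, eps=1e-6):
    """Assumed form of the ADC estimator Psi: two-point log-ratio between the b = 0 channel
    and the largest-b channel (channel-last layout, one channel per b value)."""
    s0 = tf.abs(dwi[..., 0]) + eps
    sb = tf.abs(dwi[..., -1]) + eps
    return -tf.math.log(sb / s0) / (b_values[-1] - b_values[0])   # ADC in cm^2/s if b is in s/cm^2

def joint_loss(x_full, x_rec, b_values, eta=0.001):
    """Pixel-level loss plus the weighted apparent diffusion coefficient loss defined above;
    the mean squared error is used as the l2-type penalty."""
    pixel_loss = tf.reduce_mean(tf.square(x_full - x_rec))
    adc_loss = tf.reduce_mean(tf.square(adc_map(x_full, b_values) - adc_map(x_rec, b_values)))
    return pixel_loss + eta * adc_loss
```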
Step 4, training the human multi-nuclear DWI joint reconstruction model. The model is trained with a gradient descent algorithm to find the model parameters θ̂ that minimize the loss function L(θ):
θ̂ = arg min_θ L(θ)
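A single gradient-descent update for step 4 might look as follows; `model` stands for the joint reconstruction network G(·, θ) of step 2, `joint_loss` is the loss sketch above, and the optimizer is passed in (Embodiment 1 below uses Adam).

```python
import tensorflow as tf

def train_step(model, optimizer, inputs, x_full, b_values, eta=0.001):
    """One gradient-descent update of the model parameters theta, minimizing L(theta) above.

    inputs  : whatever the joint reconstruction model G(., theta) takes as input -- at least the
              undersampled DWI image y, plus its k-space data and sampling mask if the data
              consistency layer needs them.
    x_full  : the corresponding fully sampled DWI image x, in the same channel layout as the
              model output.
    """
    with tf.GradientTape() as tape:
        x_rec = model(inputs, training=True)              # forward propagation G(y, theta)
        loss = joint_loss(x_full, x_rec, b_values, eta)   # L(theta) on this batch
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```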
Step 5, joint reconstruction of the target human multi-nuclear DWI image. A new undersampled DWI image y is input into the trained model G(·, θ̂), and the final reconstructed image x̂ containing the different b values is obtained through forward propagation of the model.
Compared with the prior art, the invention has the following advantages:
under the condition of high acceleration multiple (more than or equal to 4 times), the method can remove undersampling artifacts, recover detailed information of DWI images and improve the imaging speed of human multi-core DWI; the reconstruction speed is high, only the forward propagation of the CNN model is needed, and the reconstruction time reaches ms magnitude; parameters do not need to be adjusted, and the method is more convenient and fast in practical application; the structural similarity of different b-value images is jointly reconstructed and mined, and the reconstruction effect is improved; a data consistency layer is added into the CNN model to ensure the consistency of the final reconstructed image and the undersampled data; and apparent diffusion coefficient loss is added into the loss function, so that the accuracy of the estimation of the apparent diffusion coefficient is improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2A is a set of fully sampled hyperpolarized ¹²⁹Xe pulmonary DWI images containing 5 b values;
FIG. 2B is an undersampled DWI image under quadruple undersampling;
FIG. 2C is the final reconstructed image of the conventional CS-MRI method under four times undersampling;
fig. 2D is a final reconstructed image obtained by using the method of embodiment 1 of the present invention under four times undersampling.
Detailed Description
The present invention is described in further detail below with reference to examples to facilitate understanding and practice by those of ordinary skill in the art; it should be understood that the examples are illustrative and the invention is not limited thereto.
Example 1:
a human body multi-core DWI joint reconstruction method based on AI comprises the following steps:
step 1, constructing a human multi-kernel DWI image training set. In the embodiment, the multi-core DWI of the human body is hyperpolarized129Xe lung DWI, human multi-nuclear DWI image training set is hyperpolarized129Xe lung DWI image training set.
Step 1.1, obtaining fully sampled hyperpolarization from a magnetic resonance imager129Xe pulmonary ventilation images. Hyperpolarization of a full sample collected from 105 volunteers129Xe pulmonary ventilation images. Fully sampled hyperpolarization129Xe pulmonary ventilation images were acquired using a 3D bSSFP sequence with a sampling matrix size of 96X 84, layer thickness of 8mm, and number of layers of 24. Selecting the image with signal-to-noise ratio greater than 6.6 to obtain 1404 total sampled hyperpolarized images129Xe pulmonary ventilation images. Hyperpolarization of full samples129The Xe pulmonary ventilation images were preprocessed and the image size was transformed to 64 x 64. Hyperpolarization of full samples after image size conversion129Xe pulmonary ventilation image as DWI image x with diffusion sensitive factor b value of 0b
Step 1.2, generating the fully sampled DWI image x. Using the DWI image x_0 with diffusion sensitivity factor b = 0 and the DWI signal diffusion model, DWI images x_b with b values different from 0 are generated. For hyperpolarized ¹²⁹Xe lung DWI, the DWI signal diffusion model is the cylinder model (Sukstanskii AL et al. Magnetic Resonance in Medicine, 2012, 67:856-):
x_b = x_0 · exp(−b·D_T) · √(π / (4·b·(D_L − D_T))) · Φ(√(b·(D_L − D_T)))   equation (1)
where x_0 is the DWI image with b = 0 and b is the diffusion sensitivity factor; in this embodiment the b values are 0, 10, 20, 30 and 40 s/cm². D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, and Φ is the error function. D_0 = 0.14 cm²/s is the diffusion coefficient of ¹²⁹Xe in the gas mixture, and Δ is the diffusion time, set to Δ = 5 ms in this embodiment. R and r are random parameters, and F_L and F_T are empirical expressions (equation (2), not reproduced here) derived by Sukstanskii (Sukstanskii et al. Magnetic Resonance in Medicine, 2012, 67:856-), from which the diffusion coefficients are obtained. R values are drawn at random within the range corresponding to real lungs, (360 ± 60) μm, and r values within the range corresponding to real lungs, (160 ± 30) μm. Using equation (1), the DWI images x_b for b = 10, 20, 30 and 40 s/cm² are generated. Finally, the DWI images x_b of the individual b values are assembled into a multi-channel DWI image that serves as the fully sampled DWI image x: x = [x_0, x_10, …, x_40]. The size of the fully sampled DWI image x is 64 × 64 × 5.
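Under the stated parameters, synthesizing the multi-b-value stack from a b = 0 ventilation image might look like the sketch below. The empirical expressions F_L and F_T that map R and r to the diffusion coefficients are not reproduced in this text, so D_L and D_T are taken as inputs with illustrative values that are not from the patent.

```python
import numpy as np
from scipy.special import erf

def cylinder_model_signal(x0, b, d_l, d_t):
    """Diffusion-weighted signal of the cylinder model, equation (1) as reconstructed above.

    x0        : b = 0 image (arbitrary units).
    b         : diffusion sensitivity factor in s/cm^2.
    d_l, d_t  : longitudinal / transverse diffusion coefficients in cm^2/s; in the embodiment
                they follow from the empirical expressions F_L, F_T for randomly drawn R and r,
                which are not reproduced here.
    """
    if b == 0:
        return x0                                    # the attenuation factor tends to 1 as b -> 0
    d_an = d_l - d_t                                 # diffusion anisotropy
    att = np.exp(-b * d_t) * np.sqrt(np.pi / (4.0 * b * d_an)) * erf(np.sqrt(b * d_an))
    return x0 * att

# Multi-b-value stack for one ventilation image, using the embodiment's b values.
b_values = [0, 10, 20, 30, 40]                       # s/cm^2
x0 = np.ones((64, 64))                               # stands in for a preprocessed 129Xe ventilation image
d_l, d_t = 0.08, 0.02                                # illustrative diffusion coefficients in cm^2/s
x_full = np.stack([cylinder_model_signal(x0, b, d_l, d_t) for b in b_values], axis=-1)  # 64 x 64 x 5
```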
Step 1.3, establishing the human multi-nuclear DWI image training set. An undersampling matrix U is generated at a sampling rate of 1/4, and the fully sampled DWI image x is retrospectively undersampled with U to obtain the undersampled DWI image y, as shown in FIG. 1. Similarly, y = [y_0, y_10, …, y_40]. The undersampled DWI images y and the fully sampled DWI images x form the human multi-nuclear DWI image training set.
Step 2, establishing the human multi-nuclear DWI joint reconstruction model. The model is denoted G(·, θ), where · represents the model input and θ the model parameters. Since the undersampled DWI image y is complex-valued, in this embodiment its real and imaginary parts are treated as separate channels, so the model input has a size of 64 × 64 × 10. The human multi-nuclear DWI joint reconstruction model comprises a residual dense block and a data consistency layer. Because the different b-value channels of the undersampled DWI image y share features within the residual dense block, the data redundancy of the DWI image in the spatial and diffusion-gradient directions can be fully exploited, improving the reconstruction quality. The residual dense block consists of three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection. The feature extraction layer extracts features from the undersampled DWI image y with a 3 × 3 convolution to generate the first feature map and passes it to the dense block. The dense block further extracts features from the first feature map to obtain the second feature map and passes it to the reconstruction layer with the residual connection; it makes full use of the hierarchical features of all convolutional layers, avoiding information loss and vanishing gradients between convolutional layers. The reconstruction layer synthesizes the second feature map into a residual image with a 1 × 1 convolution and then applies the residual connection to obtain the preliminary reconstructed image x_c. The preliminary reconstructed image x_c is fed into the data consistency layer to obtain the reconstructed image x̂ = F^H k_DC, where F^H is the inverse Fourier transform. Specifically, the data consistency layer substitutes the preliminary reconstructed image x_c into equation (3) to obtain the data-consistent k-space data k_DC, and the final reconstructed image x̂ is obtained by applying the inverse Fourier transform to k_DC. Similarly, x̂ = [x̂_0, x̂_10, …, x̂_40]. The human multi-nuclear DWI joint reconstruction model can be built with the deep learning toolkit TensorFlow in a Python 3.6 environment.
k_DC(j) = k_0(j), if j ∈ Ω;  k_DC(j) = k_c(j), if j ∉ Ω   equation (3)
where k_c = F·x_c, k_0 = F·y, F is the Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y.
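A Keras sketch of this architecture is given below. The number of dense-block layers, the growth rate and the 64 feature channels are assumptions (the patent does not fix them); the 3 × 3 feature-extraction convolution, the 1 × 1 reconstruction convolution with a residual connection, the real/imaginary channel layout and the per-b-value data consistency of equation (3) follow the description above. The k-space input must be computed with the same unshifted, unnormalized FFT convention used here.

```python
import tensorflow as tf
from tensorflow.keras import layers

GROWTH, N_DENSE_LAYERS = 32, 6            # dense-block width/depth: assumptions, not given in the patent

def residual_dense_module(inputs):
    """Feature extraction (3x3 conv) -> dense block -> 1x1 reconstruction conv + residual connection."""
    feat = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)    # first feature map
    x = feat
    for _ in range(N_DENSE_LAYERS):                                           # densely connected convolutions
        new = layers.Conv2D(GROWTH, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, new])                                    # reuse all preceding feature maps
    res = layers.Conv2D(inputs.shape[-1], 1, padding="same")(x)               # synthesize the residual image
    return layers.Add()([inputs, res])                                        # residual connection -> x_c

def to_complex(t):
    """First 5 channels = real parts, last 5 = imaginary parts (the embodiment's channel layout)."""
    return tf.complex(t[..., :5], t[..., 5:])

def to_channels(c):
    return tf.concat([tf.math.real(c), tf.math.imag(c)], axis=-1)

def dc_layer(x_c, k0, mask):
    """Data consistency of equation (3), applied to every b-value channel."""
    kc = tf.signal.fft2d(tf.transpose(to_complex(x_c), [0, 3, 1, 2]))         # FFT over the image axes
    k0c = tf.transpose(to_complex(k0), [0, 3, 1, 2])
    m = tf.cast(tf.transpose(mask, [0, 3, 1, 2]), kc.dtype)
    k_dc = m * k0c + (1.0 - m) * kc                                           # keep measured samples on Omega
    x_dc = tf.transpose(tf.signal.ifft2d(k_dc), [0, 2, 3, 1])
    return to_channels(x_dc)

y_in = layers.Input((64, 64, 10))          # undersampled DWI image y, real/imaginary channels
k0_in = layers.Input((64, 64, 10))         # measured undersampled k-space F*y, same channel layout
mask_in = layers.Input((64, 64, 5))        # binary sampling mask (Omega) per b value
x_c = residual_dense_module(y_in)
x_hat = layers.Lambda(lambda t: dc_layer(*t))([x_c, k0_in, mask_in])
model = tf.keras.Model([y_in, k0_in, mask_in], x_hat)
```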
Step 3, defining the loss function. The loss function L(θ) of the human multi-nuclear DWI joint reconstruction model G(·, θ) includes a pixel-level loss and an apparent diffusion coefficient loss:
L(θ) = E[||x − G(y, θ)||_2] + η·E[||Ψ(x) − Ψ(G(y, θ))||_2]   equation (4)
where E[·] denotes the expectation operator, y is the undersampled DWI image, G(y, θ) is the final reconstructed image x̂, ||·||_2 denotes the L2 norm, Ψ denotes the estimation function of the apparent diffusion coefficient, and η = 0.001 is the weighting factor of the apparent diffusion coefficient loss.
Step 4, training the human multi-nuclear DWI joint reconstruction model. The Adam algorithm [Kingma et al. arXiv preprint, 2014, arXiv:1412.6980] is used to train the model and search for the model parameters θ̂ that minimize the loss function L(θ), i.e.
θ̂ = arg min_θ L(θ)   equation (5)
The learning rate of the Adam algorithm is 0.0002, the first-order momentum is set to 0.9, and the second-order momentum is set to 0.999. After training, the model parameters of the human multi-nuclear DWI joint reconstruction model are fixed to θ̂, and the trained model G(·, θ̂) can be used to reconstruct new hyperpolarized ¹²⁹Xe pulmonary DWI images.
Step 5, joint reconstruction of the target human multi-nuclear DWI image. A new undersampled DWI image y (shown in FIG. 2B) is input into the trained model G(·, θ̂), and the final reconstructed image x̂ containing the different b values is obtained through forward propagation of the model. Similarly, x̂ = [x̂_0, x̂_10, …, x̂_40].
FIG. 2A shows the fully sampled images: hyperpolarized ¹²⁹Xe pulmonary DWI images containing 5 b values (b = 0, 10, 20, 30, 40 s/cm²). FIG. 2B shows the undersampled DWI image y at a sampling rate of 1/4, which has lost most of the structural and detail information and contains severe undersampling artifacts. Although the conventional CS-MRI method recovers part of the structural information, FIG. 2C contains residual artifacts and a noticeable smoothing effect. As shown in FIG. 2D, the method of the invention successfully removes the undersampling artifacts and accurately recovers the structural and detail information of the hyperpolarized DWI images. In addition, the method requires only a forward pass of the CNN model, so the reconstruction speed is high.
The specific embodiments described herein are merely illustrative of the invention. The AI method of the invention is not limited to CNN and may also include recurrent neural networks (RNN) and the like. The multi-nuclear DWI of the invention is not limited to the ¹²⁹Xe of the embodiment; it may also be ³He, ¹⁹F, etc., and the invention is also applicable to undersampled reconstruction of conventional ¹H DWI. The CNN model is not limited to the RDB and may also be a residual network, U-Net and the like. The CNN model training method is not limited to Adam and also includes RMSProp and other optimization algorithms commonly used in deep learning.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (5)

1. An AI-based human multi-nuclear DWI joint reconstruction method, characterized by comprising the following steps:
step 1, establishing a human multi-nuclear DWI image training set comprising undersampled DWI images y and fully sampled DWI images x;
step 2, establishing a human multi-nuclear DWI joint reconstruction model, the model being denoted G(·, θ), where · represents the model input and θ denotes the model parameters;
step 3, defining the loss function L(θ) of the human multi-nuclear DWI joint reconstruction model G(·, θ):
L(θ) = E[||x − G(y, θ)||_2] + η·E[||Ψ(x) − Ψ(G(y, θ))||_2]
where E[·] denotes the expectation operator, y is the undersampled DWI image, ||·||_2 denotes the L2 norm, Ψ denotes the estimation function of the apparent diffusion coefficient, and η is the weighting factor of the apparent diffusion coefficient loss;
step 4, training the human multi-nuclear DWI joint reconstruction model with a gradient descent algorithm and searching for the model parameters θ̂ = arg min_θ L(θ) that minimize the loss function L(θ);
step 5, inputting a new undersampled DWI image y into the trained model G(·, θ̂), and obtaining the final reconstructed image x̂ containing different b values through forward propagation of the model.
2. The AI-based human multi-nuclear DWI joint reconstruction method according to claim 1, characterized in that step 1 comprises the following steps:
step 1.1, acquiring a fully sampled DWI image x_0 with diffusion sensitivity factor b = 0 from a magnetic resonance imager;
step 1.2, generating the fully sampled DWI image x: using the DWI image x_0 with b = 0 and the DWI signal diffusion model, generating the DWI image x_b for each b value, and combining the DWI images x_b of the individual b values into the fully sampled DWI image x;
step 1.3, establishing the human multi-nuclear DWI image training set: generating an undersampling matrix U and retrospectively undersampling the fully sampled DWI image x with the undersampling matrix U to obtain the undersampled DWI image y, the undersampled DWI images y and the fully sampled DWI images x forming the human multi-nuclear DWI image training set.
3. The AI-based human multi-nuclear DWI joint reconstruction method according to claim 2, characterized in that in step 1.2 the DWI signal diffusion model is:
x_b = x_0 · exp(−b·D_T) · √(π / (4·b·(D_L − D_T))) · Φ(√(b·(D_L − D_T)))
where b is the diffusion sensitivity factor, D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, Φ is the error function, and x_0 is the DWI image with diffusion sensitivity factor b = 0.
4. The AI-based human multi-nuclear DWI joint reconstruction method according to claim 1, characterized in that the human multi-nuclear DWI joint reconstruction model comprises a residual dense block and a data consistency layer, the residual dense block consisting of three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection;
the feature extraction layer extracts features from the model input to generate a first feature map and passes it to the dense block; the dense block further extracts features from the first feature map to obtain a second feature map and passes it to the reconstruction layer with the residual connection; the reconstruction layer synthesizes the second feature map into a residual image and then applies the residual connection to obtain a preliminary reconstructed image x_c; the preliminary reconstructed image x_c is fed into the data consistency layer to obtain the final reconstructed image x̂.
5. The AI-based human multi-nuclear DWI joint reconstruction method according to claim 4, characterized in that feeding the preliminary reconstructed image x_c into the data consistency layer to obtain the final reconstructed image x̂ comprises the following steps:
the data consistency layer substitutes the preliminary reconstructed image x_c into the following formula to obtain the data-consistent k-space data k_DC, and the final reconstructed image x̂ is obtained by applying the inverse Fourier transform to k_DC:
k_DC(j) = k_0(j), if j ∈ Ω;  k_DC(j) = k_c(j), if j ∉ Ω
where k_c = F·x_c, k_0 = F·y, F is the Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y.
CN201911400857.1A 2019-12-30 2019-12-30 Human body multi-core DWI joint reconstruction method based on AI Active CN111161370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400857.1A CN111161370B (en) 2019-12-30 2019-12-30 Human body multi-core DWI joint reconstruction method based on AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911400857.1A CN111161370B (en) 2019-12-30 2019-12-30 Human body multi-core DWI joint reconstruction method based on AI

Publications (2)

Publication Number Publication Date
CN111161370A true CN111161370A (en) 2020-05-15
CN111161370B CN111161370B (en) 2021-10-29

Family

ID=70559371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400857.1A Active CN111161370B (en) 2019-12-30 2019-12-30 Human body multi-core DWI joint reconstruction method based on AI

Country Status (1)

Country Link
CN (1) CN111161370B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085197A (en) * 2020-09-11 2020-12-15 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN113066145A (en) * 2021-04-29 2021-07-02 武汉聚垒科技有限公司 Rapid whole-body diffusion weighted imaging method based on deep learning and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309328A (en) * 2011-10-19 2012-01-11 中国科学院深圳先进技术研究院 Diffusion-tensor imaging method and system
CN106249183A (en) * 2016-09-24 2016-12-21 中国科学院武汉物理与数学研究所 A kind of hyperpolarization xenon magnetic resonance method based on spectrum picture integration
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN109256023A (en) * 2018-11-28 2019-01-22 中国科学院武汉物理与数学研究所 A kind of measurement method of pulmonary airways microstructure model
CN109410289A (en) * 2018-11-09 2019-03-01 中国科学院武汉物理与数学研究所 A kind of high lack sampling hyperpolarized gas lung MRI method for reconstructing of deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309328A (en) * 2011-10-19 2012-01-11 中国科学院深圳先进技术研究院 Diffusion-tensor imaging method and system
CN106249183A (en) * 2016-09-24 2016-12-21 中国科学院武汉物理与数学研究所 A kind of hyperpolarization xenon magnetic resonance method based on spectrum picture integration
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN109410289A (en) * 2018-11-09 2019-03-01 中国科学院武汉物理与数学研究所 A kind of high lack sampling hyperpolarized gas lung MRI method for reconstructing of deep learning
CN109256023A (en) * 2018-11-28 2019-01-22 中国科学院武汉物理与数学研究所 A kind of measurement method of pulmonary airways microstructure model

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIANPING ZHONG et al.: "Simultaneous assessment of both lung morphometry and gas exchange function within a single breath-hold by hyperpolarized 129Xe MRI", NMR in Biomedicine *
RONGZHAO ZHANG: "Automatic Segmentation of Acute Ischemic Stroke From DWI Using 3-D Fully Convolutional DenseNets", IEEE Transactions on Medical Imaging *
ZHANG HUITING: "Hyperpolarized 129Xe diffusion-weighted MRI methods and their application to the study of lung diseases", China Doctoral Dissertations Full-text Database (Medicine and Health Sciences) *
WANG PING et al.: "Ischemic stroke lesion segmentation algorithm based on 3D deep residual network and cascaded U-Net", Journal of Computer Applications *
WANG KE et al.: "Application of diffusion-weighted imaging based on non-Gaussian distribution models in body diseases", Chinese Journal of Magnetic Resonance Imaging *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085197A (en) * 2020-09-11 2020-12-15 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN112085197B (en) * 2020-09-11 2022-07-22 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN113066145A (en) * 2021-04-29 2021-07-02 武汉聚垒科技有限公司 Rapid whole-body diffusion weighted imaging method based on deep learning and related equipment
CN113066145B (en) * 2021-04-29 2023-12-26 武汉聚垒科技有限公司 Deep learning-based rapid whole-body diffusion weighted imaging method and related equipment

Also Published As

Publication number Publication date
CN111161370B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
Sandino et al. Accelerating cardiac cine MRI using a deep learning‐based ESPIRiT reconstruction
CN106780372B (en) A kind of weight nuclear norm magnetic resonance imaging method for reconstructing sparse based on Generalized Tree
Poddar et al. Dynamic MRI using smoothness regularization on manifolds (SToRM)
Usman et al. Motion corrected compressed sensing for free‐breathing dynamic cardiac MRI
Hamilton et al. Machine learning for rapid magnetic resonance fingerprinting tissue property quantification
CN106997034B (en) Based on the magnetic resonance diffusion imaging method rebuild using Gauss model as example integration
US11170543B2 (en) MRI image reconstruction from undersampled data using adversarially trained generative neural network
CN108010094B (en) Magnetic resonance image reconstruction method and device
CN104013403B (en) A kind of three-dimensional cardiac MR imaging method based on resolution of tensor sparse constraint
CN111161370B (en) Human body multi-core DWI joint reconstruction method based on AI
CN110942496A (en) Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
Lingala et al. Accelerated first pass cardiac perfusion MRI using improved k− t SLR
CN105678822B (en) A kind of three canonical magnetic resonance image reconstructing methods based on Split Bregman iteration
Wang et al. Accelerating MR imaging via deep Chambolle-Pock network
Kleineisel et al. Real‐time cardiac MRI using an undersampled spiral k‐space trajectory and a reconstruction based on a variational network
Hou et al. Pncs: Pixel-level non-local method based compressed sensing undersampled mri image reconstruction
CN112489150B (en) Multi-scale sequential training method of deep neural network for rapid MRI
CN103728581B (en) Based on the SPEED rapid magnetic resonance imaging method of discrete cosine transform
Gan et al. SS-JIRCS: Self-supervised joint image reconstruction and coil sensitivity calibration in parallel MRI without ground truth
CN113866694B (en) Rapid three-dimensional magnetic resonance T1 quantitative imaging method, system and medium
CN107209242A (en) Method and system for the MR images of the object that generates the movement in its environment
CN113506258B (en) Under-sampling lung gas MRI reconstruction method for multitask complex value deep learning
CN114004764B (en) Improved sensitivity coding reconstruction method based on sparse transform learning
CN105488757A (en) Brain fiber sparse reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211011

Address after: 430071 Xiaohongshan West 30, Wuchang District, Wuhan, Hubei

Applicant after: Institute of precision measurement science and technology innovation, Chinese Academy of Sciences

Address before: 430071 Xiaohongshan West 30, Wuchang District, Wuhan, Hubei

Applicant before: WUHAN INSTITUTE OF PHYSICS AND MATHEMATICS, CHINESE ACADEMY OF SCIENCES

GR01 Patent grant