CN111161370A - Human body multi-core DWI joint reconstruction method based on AI

Info

Publication number
CN111161370A
CN111161370A (application CN201911400857.1A)
Authority
CN
China
Prior art keywords
dwi
image
core
human body
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911400857.1A
Other languages
Chinese (zh)
Other versions
CN111161370B (en)
Inventor
周欣
段曹辉
邓鹤
娄昕
孙献平
叶朝辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Precision Measurement Science and Technology Innovation of CAS
Original Assignee
Wuhan Institute of Physics and Mathematics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Physics and Mathematics of CAS filed Critical Wuhan Institute of Physics and Mathematics of CAS
Priority to CN201911400857.1A priority Critical patent/CN111161370B/en
Publication of CN111161370A publication Critical patent/CN111161370A/en
Application granted granted Critical
Publication of CN111161370B publication Critical patent/CN111161370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract


The invention discloses an AI-based human body multi-core DWI joint reconstruction method: a human body multi-core DWI image training set is established; a human body multi-core DWI joint reconstruction model is established; the loss function of the model is defined; the model is trained with a gradient descent algorithm; a new undersampled DWI image is input into the trained model and, through forward propagation of the model, the final reconstructed images containing different b values are obtained. The invention obtains high-quality reconstructed images at high acceleration factors, and the reconstruction speed is fast.


Description

Human body multi-core DWI joint reconstruction method based on AI
Technical Field
The invention relates to the technical fields of multi-nuclear ("multi-core") magnetic resonance imaging (MRI), artificial intelligence (AI), deep learning, and undersampled reconstruction, and in particular to an AI-based human body multi-core DWI joint reconstruction method. The method is suitable for accelerating the imaging speed of human multi-nuclear (e.g., 129Xe, 3He) DWI, or for acquiring more data in the same scan time.
Background
Multinuclear MRI can provide abundant physiological and pathological information; for example, hyperpolarized gas (129Xe, 3He) pulmonary MRI can provide high-resolution structural and functional images of the lungs. In particular, hyperpolarized gas pulmonary DWI can sensitively assess structural and functional changes associated with pulmonary disease. Combined with a gas diffusion theoretical model, multi-b-value DWI can non-invasively and quantitatively obtain lung morphological parameters at the alveolar level, such as the acinar airway outer radius (R) and inner radius (r), the alveolar depth (h), the mean linear intercept (Lm), and the surface-to-volume ratio (SVR). However, multi-b-value DWI requires longer acquisition times. For example, acquiring a set of low-resolution DWI data (4 slices, 5 b values, resolution 64 × 64) requires a breath-hold of approximately 18 s, and acquiring a set of 3D whole-lung DWI data (10-15 mm slice thickness) requires more than 1 min. Although studies have acquired multi-b-value DWI data using a multi-breath approach, multiple breaths lead to differences in lung volume, longer acquisition times, and higher gas costs. Therefore, the DWI imaging speed needs to be accelerated, and a single breath-hold multi-b-value DWI imaging method needs to be developed.
Compressed sensing-based MRI (CS-MRI) speeds up imaging by undersampling k-space, without requiring additional hardware or sequences. Chan et al. applied CS-MRI to 3D multi-b-value DWI, enabling single breath-hold whole-lung morphological parameter measurement [Chan et al., Magn Reson Med, 2017, 77:1916–]. Abascal et al. undersampled DWI data in the spatial and diffusion directions and, combining this with prior knowledge of the signal attenuation during reconstruction, obtained acceleration factors of 7 to 10 and significantly shortened the imaging time of multi-b-value DWI [Abascal et al., IEEE Trans Med Imaging, 2018, 37:547–]. Westcott et al. further applied the method to high-resolution hyperpolarized 3He multi-b-value DWI [Westcott et al., J Magn Reson Imaging, 2019, 49:1713–1722]. However, the CS-MRI technique has some limitations. The nonlinear reconstruction algorithm of CS-MRI involves iterative computation and requires a long reconstruction time; for example, in the study of Westcott et al., 2-3 min is needed to reconstruct one slice of DWI images, which can hardly meet the requirement of clinical real-time reconstruction. In addition, the hyperparameters of CS-MRI are difficult to select, and improper hyperparameters lead to over-smoothed reconstruction results or residual undersampling artifacts.
More recently, AI has been applied to MRI undersampled reconstruction. AI-based MRI reconstruction uses a deep convolutional neural network (CNN) to extract abstract feature representations and learns the nonlinear mapping between undersampled and fully sampled images from a large amount of training data. Compared with CS-MRI, AI-based MRI reconstruction has clear advantages in reconstruction speed, image quality, and acceleration factor. However, because hyperpolarized DWI images have a low signal-to-noise ratio and training data are scarce, applying AI to hyperpolarized DWI reconstruction has not yet been studied.
Compared with other MRI modalities (T1, T2, etc.), DWI images are multi-channel data composed of different b-value images; they have not only spatial sparsity but also low rank along the diffusion-gradient direction. Wang et al. proposed a joint denoising CNN model that improves the denoising of DWI images by cascading high-level features of different b-value images [Wang et al., J Magn Reson Imaging, 2019, 50:1937–]. Xiang et al. proposed a multi-modal fusion method that reconstructs undersampled T2-weighted images using the complementary information of T1-weighted images [Xiang et al., IEEE Trans Biomed Eng, 2018, 66:2105–2114]. Similarly, if the data redundancy of hyperpolarized multi-b-value DWI in space and along the diffusion direction is fully exploited, the reconstruction quality of DWI images can be further improved.
Based on the above analysis, the invention provides an AI-based human body multi-core DWI joint reconstruction method. The method uses a CNN model to learn the nonlinear mapping between undersampled and fully sampled images, while joint reconstruction mines the data redundancy of DWI along the spatial and diffusion-gradient directions, improving the reconstruction quality. Compared with CS-MRI, the method achieves better image reconstruction and faster reconstruction at high acceleration factors (≥ 4).
Disclosure of Invention
The object of the invention is to provide an AI-based human body multi-core DWI joint reconstruction method that addresses the defects and shortcomings of the prior art.
To achieve the above object, the invention adopts the following technical solution:
a human body multi-core DWI joint reconstruction method based on AI comprises the following steps:
step 1, establishing a human body multi-kernel DWI image training set, wherein the human body multi-kernel DWI image training set comprises an undersampled DWI image y and a full-sampling DWI image x.
Step 1.1, acquiring a fully sampled DWI image x with a diffusion sensitivity factor b value of 0 from a magnetic resonance imagerb
And 1.2, generating a full sampling DWI image x. Using DWI images x with a diffusion sensitivity factor b value of 0bDWI signal diffusion model, DWI image x of each b valueb. DWI image x of individual b-valuesbAre combined into a fully sampled DWI image x.
The DWI signal diffusion model is:

$$x_b = x_0\, e^{-b D_T}\,\sqrt{\frac{\pi}{4 b\,(D_L - D_T)}}\;\Phi\!\left(\sqrt{b\,(D_L - D_T)}\right)$$

where b is the diffusion sensitivity factor, D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, Φ is the error function, and x_0 is the DWI image with diffusion sensitivity factor b = 0.
Step 1.3: establish the human body multi-core DWI image training set. Generate an undersampling matrix U and retrospectively undersample the fully sampled DWI image x with U to obtain the undersampled DWI image y. The undersampled DWI images y and the fully sampled DWI images x form the human body multi-core DWI image training set.
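Below is a minimal NumPy sketch of this retrospective undersampling step; the 1D Cartesian mask pattern, the centered k-space convention, and the function names are illustrative assumptions, since the text does not specify how the undersampling matrix U is generated.

```python
import numpy as np

def make_undersampling_mask(ny=64, nx=64, accel=4, n_center=8, seed=0):
    """Assumed 1D Cartesian pattern: a fully sampled band of central phase-encoding
    lines plus randomly chosen lines, keeping 1/accel of k-space (the matrix U)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((ny, nx), dtype=np.float32)
    mask[ny // 2 - n_center // 2: ny // 2 + n_center // 2, :] = 1.0
    candidates = np.flatnonzero(mask[:, 0] == 0)
    extra = rng.choice(candidates, size=ny // accel - n_center, replace=False)
    mask[extra, :] = 1.0
    return mask

def retrospective_undersample(x, mask):
    """x: fully sampled DWI image, shape (ny, nx, n_b). The mask is defined in
    centered (fftshift-ed) k-space; returns the zero-filled undersampled image y."""
    k = np.fft.fftshift(np.fft.fft2(x, axes=(0, 1)), axes=(0, 1))   # centered k-space per b-value channel
    k_under = k * mask[..., None]                                    # keep only the sampled set Omega
    y = np.fft.ifft2(np.fft.ifftshift(k_under, axes=(0, 1)), axes=(0, 1))
    return y                                                         # complex-valued undersampled DWI image y

# Usage: U = make_undersampling_mask(); y = retrospective_undersample(x, U)
```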
Step 2: establish the human body multi-core DWI joint reconstruction model. The model is denoted G(·, θ), where · denotes the model input and θ the model parameters; the output of the model is the final reconstructed image x̂.
The human body multi-core DWI joint reconstruction model is a CNN model.
The model comprises a residual dense block (RDB) and a data consistency (DC) layer. The residual dense block has three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection.
The feature extraction layer extracts features from the model input to generate a first feature map and feeds it to the dense block. The dense block further extracts features from the first feature map to obtain a second feature map and feeds it to the reconstruction layer with the residual connection. The reconstruction layer synthesizes the second feature map into a residual image and then applies the residual connection to obtain a preliminary reconstructed image x_c. The preliminary reconstructed image x_c is fed into the data consistency layer to obtain the final reconstructed image x̂.
Feeding the preliminary reconstructed image x_c into the data consistency layer to obtain the final reconstructed image x̂ comprises the following steps: the data consistency layer substitutes the k-space data of x_c into the following formula to obtain the data-consistent k-space data k_DC, and then applies the inverse Fourier transform to k_DC to obtain the final reconstructed image x̂:

$$k_{DC}(j) = \begin{cases} k_0(j), & j \in \Omega \\ k_c(j), & j \notin \Omega \end{cases}, \qquad \hat{x} = F^{H} k_{DC}$$

where k_c = F x_c, k_0 = F y, F is the Fourier transform, F^H is the inverse Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y.
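As a concrete illustration of the data consistency operation defined above, the NumPy sketch below keeps the measured k-space values on the sampled set Ω and the CNN prediction elsewhere; representing Ω as a binary mask is an assumption.

```python
import numpy as np

def data_consistency(x_c, y, mask):
    """x_c: preliminary reconstructed image (complex, ny x nx).
    y: zero-filled undersampled image, whose k-space holds the measured data k_0.
    mask: 1 at sampled k-space coordinates (the set Omega), 0 elsewhere."""
    k_c = np.fft.fft2(x_c)                   # k_c = F x_c
    k_0 = np.fft.fft2(y)                     # k_0 = F y
    k_dc = np.where(mask > 0, k_0, k_c)      # enforce consistency with the measured samples on Omega
    return np.fft.ifft2(k_dc)                # final reconstructed image = F^H k_DC
```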
Step 3: define the loss function L(θ) of the human body multi-core DWI joint reconstruction model G(·, θ):

$$L(\theta) = \mathbb{E}\left[\|x - G(y,\theta)\|_{2}\right] + \eta\,\mathbb{E}\left[\|\Psi(x) - \Psi(G(y,\theta))\|_{2}\right]$$

where E[·] denotes the expectation operation, y is the undersampled DWI image, G(y, θ) is the final reconstructed image x̂, ‖·‖₂ denotes the L2 norm, Ψ denotes the estimation function of the apparent diffusion coefficient (ADC), and η is the weighting coefficient of the apparent diffusion coefficient loss.
The first term of the above equation is the pixel-level loss between the fully sampled image and the reconstructed image, and the second term is the loss between the apparent diffusion coefficients estimated from the fully sampled image and from the reconstructed image. Because the apparent diffusion coefficient extracted from DWI images has important physiological significance, adding the apparent diffusion coefficient loss to the loss function improves the accuracy of ADC estimation from the reconstructed image.
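The text does not spell out the ADC estimation function Ψ; the TensorFlow sketch below assumes a standard least-squares mono-exponential (log-linear) fit over the b-value channel and combines the two terms as in the formula above. The helper names and the magnitude/channel handling are assumptions.

```python
import tensorflow as tf

def adc_map(images, b_values, eps=1e-6):
    """Assumed Psi: log-linear least-squares fit of S(b) = S(0)*exp(-b*ADC)
    over the b-value channel (last axis). images: (batch, H, W, n_b)."""
    log_s = tf.math.log(tf.maximum(tf.abs(images), eps))       # (batch, H, W, n_b)
    b = tf.constant(b_values, dtype=log_s.dtype)               # (n_b,)
    b_mean = tf.reduce_mean(b)
    s_mean = tf.reduce_mean(log_s, axis=-1, keepdims=True)
    cov = tf.reduce_sum((b - b_mean) * (log_s - s_mean), axis=-1)
    var = tf.reduce_sum((b - b_mean) ** 2)
    return -cov / var                                          # ADC map, shape (batch, H, W)

def joint_loss(x_full, x_rec, b_values, eta=0.001):
    """Pixel-level L2 loss plus apparent-diffusion-coefficient loss, per the formula above."""
    pixel = tf.reduce_mean(tf.norm(x_full - x_rec, axis=(1, 2)))
    adc = tf.reduce_mean(tf.norm(adc_map(x_full, b_values) - adc_map(x_rec, b_values), axis=(1, 2)))
    return pixel + eta * adc
```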
Step 4: train the human body multi-core DWI joint reconstruction model. Train the model with a gradient descent algorithm, searching for the model parameters θ that minimize the loss function L(θ); the minimizing parameters are denoted θ̂:

$$\hat{\theta} = \arg\min_{\theta} L(\theta)$$
Step 5: jointly reconstruct the target human body multi-core DWI images. Input a new undersampled DWI image y into G(·, θ̂); after forward propagation of the model, the final reconstructed image x̂ containing different b values is obtained.
Compared with the prior art, the invention has the following advantages:
under the condition of high acceleration multiple (more than or equal to 4 times), the method can remove undersampling artifacts, recover detailed information of DWI images and improve the imaging speed of human multi-core DWI; the reconstruction speed is high, only the forward propagation of the CNN model is needed, and the reconstruction time reaches ms magnitude; parameters do not need to be adjusted, and the method is more convenient and fast in practical application; the structural similarity of different b-value images is jointly reconstructed and mined, and the reconstruction effect is improved; a data consistency layer is added into the CNN model to ensure the consistency of the final reconstructed image and the undersampled data; and apparent diffusion coefficient loss is added into the loss function, so that the accuracy of the estimation of the apparent diffusion coefficient is improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2A is a set of fully sampled hyperpolarized 129Xe pulmonary DWI images containing 5 b values;
FIG. 2B is the undersampled DWI image under four-fold undersampling;
FIG. 2C is the final reconstructed image of the conventional CS-MRI method under four-fold undersampling;
FIG. 2D is the final reconstructed image obtained with the method of Embodiment 1 of the present invention under four-fold undersampling.
Detailed Description
The present invention will be described in further detail below with reference to an embodiment, in order to facilitate understanding and practice of the invention by those of ordinary skill in the art; it should be understood that the embodiment described here is illustrative and the invention is not limited thereto.
Example 1:
a human body multi-core DWI joint reconstruction method based on AI comprises the following steps:
step 1, constructing a human multi-kernel DWI image training set. In the embodiment, the multi-core DWI of the human body is hyperpolarized129Xe lung DWI, human multi-nuclear DWI image training set is hyperpolarized129Xe lung DWI image training set.
Step 1.1: obtain fully sampled hyperpolarized 129Xe pulmonary ventilation images from a magnetic resonance imager. Fully sampled hyperpolarized 129Xe pulmonary ventilation images were collected from 105 volunteers using a 3D bSSFP sequence with a sampling matrix size of 96 × 84, a slice thickness of 8 mm, and 24 slices. Images with a signal-to-noise ratio greater than 6.6 were selected, giving 1404 fully sampled hyperpolarized 129Xe pulmonary ventilation images in total. These images were preprocessed and resized to 64 × 64, and the resized images were used as the DWI image x_0 with diffusion sensitivity factor b = 0.
Step 1.2: generate the fully sampled DWI image x. Using the b = 0 DWI image x_0 and the DWI signal diffusion model, generate DWI images x_b with diffusion sensitivity factors b ≠ 0. For hyperpolarized 129Xe lung DWI, the DWI signal diffusion model is the cylinder model (Sukstanskii AL et al., Magnetic Resonance in Medicine, 2012, 67:856–):

$$x_b = x_0\, e^{-b D_T}\,\sqrt{\frac{\pi}{4 b\,(D_L - D_T)}}\;\Phi\!\left(\sqrt{b\,(D_L - D_T)}\right) \tag{1}$$

together with equation (2), the empirical expressions of Sukstanskii et al. that relate D_L and D_T to D_0, Δ, R, and r (not reproduced here).

Here x_0 is the DWI image with b = 0 and b is the diffusion sensitivity factor; in this embodiment the b values are 0, 10, 20, 30, and 40 s/cm². D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, and Φ is the error function. D_0 = 0.14 cm²/s is the diffusion coefficient of 129Xe in the gas mixture. Δ is the diffusion time; in this embodiment Δ = 5 ms. R and r are random parameters, and F_L and F_T are the empirical expressions derived by Sukstanskii (Sukstanskii et al., Magnetic Resonance in Medicine, 2012, 67:856–). An R value is generated randomly within the range corresponding to the real lung, (360 ± 60) μm, and an r value within the corresponding range, (160 ± 30) μm. Using equation (1), the DWI images x_b for b = 10, 20, 30, and 40 s/cm² are generated. Finally, the x_b of the various b values are combined into a multi-channel DWI image as the fully sampled DWI image x: x = [x_0, x_10, …, x_40]. The size of the fully sampled DWI image x is 64 × 64 × 5.
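A sketch of this synthetic multi-b-value data generation is given below, using the cylinder-model signal equation (1) as reconstructed above. Since the empirical expressions F_L and F_T are not reproduced here, the sketch takes D_L and D_T directly as inputs; the function name and the numeric values in the usage line are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def synthesize_dwi(x0, b_values, d_l, d_t):
    """x0: b = 0 ventilation image, shape (H, W). b_values in s/cm^2 (first entry 0).
    d_l, d_t: longitudinal and transverse diffusion coefficients in cm^2/s, e.g. obtained
    from D_0 = 0.14 cm^2/s and the Sukstanskii empirical expressions for randomly drawn
    R in (360 +/- 60) um and r in (160 +/- 30) um (not implemented here)."""
    channels = [x0]
    d_an = d_l - d_t                                   # diffusion anisotropy D_L - D_T
    for b in b_values[1:]:
        att = np.exp(-b * d_t) * np.sqrt(np.pi / (4.0 * b * d_an)) * erf(np.sqrt(b * d_an))
        channels.append(x0 * att)                      # equation (1): x_b = x_0 * attenuation(b)
    return np.stack(channels, axis=-1)                 # fully sampled x, shape (H, W, n_b)

# Usage (illustrative diffusion coefficients only):
# x = synthesize_dwi(x0, [0, 10, 20, 30, 40], d_l=0.09, d_t=0.02)
```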
Step 1.3: establish the human body multi-core DWI image training set. An undersampling matrix U is generated at a sampling rate of 1/4, and the fully sampled DWI image x is retrospectively undersampled with U to obtain the undersampled DWI image y, as shown in FIG. 1. Similarly, y = [y_0, y_10, …, y_40]. The undersampled DWI images y and the fully sampled DWI images x form the human body multi-core DWI image training set.
Step 2: establish the human body multi-core DWI joint reconstruction model. The model is denoted G(·, θ), where · denotes the model input and θ the model parameters. Since the undersampled DWI image y is complex-valued, its real and imaginary parts are treated as separate channels in this embodiment, so the model input size is 64 × 64 × 10. The model comprises a residual dense block and a data consistency layer. The different b-value channels of the undersampled DWI image y share features within the residual dense block, so the data redundancy of the DWI images along the spatial and diffusion-gradient directions can be fully exploited and the reconstruction quality improved. The residual dense block has three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection. The feature extraction layer extracts features from the undersampled DWI image y with 3 × 3 convolutions to generate a first feature map, which is fed to the dense block. The dense block further extracts features to obtain a second feature map, which is fed to the reconstruction layer with the residual connection; the hierarchical features of all convolutional layers are fully used, avoiding information loss and vanishing gradients between convolutional layers. The reconstruction layer synthesizes the second feature map into a residual image with a 1 × 1 convolution and then applies the residual connection to obtain the preliminary reconstructed image x_c. The preliminary reconstructed image x_c is fed into the data consistency layer to obtain the final reconstructed image x̂ = F^H k_DC, where F^H is the inverse Fourier transform. The specific operation of the data consistency layer is: substitute the k-space data of x_c into equation (3) to obtain the data-consistent k-space data k_DC, then apply the inverse Fourier transform to k_DC to obtain the final reconstructed image x̂. Similarly, x̂ = [x̂_0, x̂_10, …, x̂_40].

$$k_{DC}(j) = \begin{cases} k_0(j), & j \in \Omega \\ k_c(j), & j \notin \Omega \end{cases} \tag{3}$$

where k_c = F x_c, k_0 = F y, F is the Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y. The human body multi-core DWI joint reconstruction model can be built with the deep learning toolkit TensorFlow under Python 3.6.
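Since the embodiment builds the model with TensorFlow under Python 3.6, a minimal Keras sketch of the residual dense block described above is given here. The number of dense layers, the growth rate, and the layer names are assumptions not specified in the text; the data consistency layer (equation (3)) would be applied afterwards, as in the earlier sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_dense_block(input_shape=(64, 64, 10), n_dense_layers=4, growth=32, n_out=10):
    """Feature extraction (3x3 conv) -> dense block with concatenated hierarchical
    features -> 1x1 reconstruction conv -> residual connection, as described in step 2."""
    inp = layers.Input(shape=input_shape)                 # real/imag parts of y as channels
    feat = layers.Conv2D(growth, 3, padding="same", activation="relu")(inp)  # first feature map

    dense = [feat]
    for _ in range(n_dense_layers):                       # dense block: each layer sees all previous maps
        x = layers.Concatenate()(dense) if len(dense) > 1 else dense[0]
        x = layers.Conv2D(growth, 3, padding="same", activation="relu")(x)
        dense.append(x)

    second = layers.Concatenate()(dense)                  # second feature map (hierarchical features)
    residual = layers.Conv2D(n_out, 1, padding="same")(second)   # 1x1 conv -> residual image
    x_c = layers.Add()([inp, residual])                   # residual connection -> preliminary reconstruction x_c
    return tf.keras.Model(inp, x_c, name="rdb")

model = residual_dense_block()
# The data consistency layer (equation (3)) then replaces the k-space samples of x_c at the
# sampled coordinates Omega with the measured data, as sketched earlier.
```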
Step 3: define the loss function. The loss function L(θ) of the human body multi-core DWI joint reconstruction model G(·, θ) comprises a pixel-level loss and an apparent diffusion coefficient loss:

$$L(\theta) = \mathbb{E}\left[\|x - G(y,\theta)\|_{2}\right] + \eta\,\mathbb{E}\left[\|\Psi(x) - \Psi(G(y,\theta))\|_{2}\right] \tag{4}$$

where E[·] denotes the expectation operation, y is the undersampled DWI image, G(y, θ) is the final reconstructed image x̂, ‖·‖₂ denotes the L2 norm, Ψ denotes the estimation function of the apparent diffusion coefficient, and η = 0.001 is the weighting coefficient of the apparent diffusion coefficient loss.
Step 4: train the human body multi-core DWI joint reconstruction model. The Adam algorithm [Kingma et al., arXiv preprint, 2014, arXiv:1412.6980] is used to train the model, searching for the model parameters θ that minimize the loss function L(θ); the minimizing parameters are denoted θ̂, i.e. they satisfy:

$$\hat{\theta} = \arg\min_{\theta} L(\theta)$$

The learning rate of the Adam algorithm is 0.0002, the first-order momentum is set to 0.9, and the second-order momentum is set to 0.999. After training, the model parameters are fixed to θ̂, and G(·, θ̂) can be used to reconstruct new hyperpolarized 129Xe pulmonary DWI images.
Step 5: jointly reconstruct the target human body multi-core DWI images. Input a new undersampled DWI image y (shown in FIG. 2B) into G(·, θ̂); after forward propagation of the model, the final reconstructed image x̂ containing different b values is obtained. Similarly, x̂ = [x̂_0, x̂_10, …, x̂_40].
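At inference time, step 5 is a single forward pass of the trained model; a short usage sketch follows, with variable names and the separately applied DC step assumed for consistency with the earlier sketches.

```python
import numpy as np

# y_new: new undersampled DWI image, real/imag parts stacked as channels -> shape (1, 64, 64, 10)
y_new = np.zeros((1, 64, 64, 10), dtype=np.float32)   # placeholder input for illustration only
x_c = model(y_new, training=False)                    # preliminary reconstruction from the trained RDB
# x_hat = data_consistency(...)                       # then apply the DC layer (equation (3)) per b value
# x_hat contains the reconstructed images for b = 0, 10, 20, 30, 40 s/cm^2
```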
FIG. 2A is the fully sampled image, comprising hyperpolarized 129Xe pulmonary DWI images of 5 b values (b = 0, 10, 20, 30, 40 s/cm²). FIG. 2B is the undersampled DWI image y at a sampling rate of 1/4, which has lost most of the structural and detail information and contains severe undersampling artifacts. Although the conventional CS-MRI method recovers part of the structural information, FIG. 2C still contains residual artifacts and a noticeable smoothing effect. As shown in FIG. 2D, the method of the present invention successfully removes the undersampling artifacts and accurately recovers the structural and detail information of the hyperpolarized DWI images. In addition, the method requires only a forward pass of the CNN model, so reconstruction is fast.
The AI method in the present invention is not limited to CNNs and may also include recurrent neural networks (RNNs) and the like. The multi-core DWI in the present invention is not limited to the 129Xe of the embodiment; it may also be 3He, 19F, etc., and the invention is also applicable to undersampled reconstruction of conventional 1H DWI. The CNN model is not limited to the RDB and may also be a residual network, U-Net, and the like. The training method of the CNN model is not limited to Adam and also includes optimization algorithms commonly used in deep learning such as RMSProp.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (5)

1. An AI-based human body multi-core DWI joint reconstruction method, characterized by comprising the following steps:

Step 1: establish a human body multi-core DWI image training set, which comprises undersampled DWI images y and fully sampled DWI images x.

Step 2: establish a human body multi-core DWI joint reconstruction model, denoted G(·, θ), where · denotes the model input and θ the model parameters.

Step 3: define the loss function L(θ) of the human body multi-core DWI joint reconstruction model G(·, θ):

$$L(\theta) = \mathbb{E}\left[\|x - G(y,\theta)\|_{2}\right] + \eta\,\mathbb{E}\left[\|\Psi(x) - \Psi(G(y,\theta))\|_{2}\right]$$

where E[·] denotes the expectation operation, y is the undersampled DWI image, ‖·‖₂ denotes the L2 norm, Ψ denotes the estimation function of the apparent diffusion coefficient, and η is the weighting coefficient of the apparent diffusion coefficient loss.

Step 4: train the human body multi-core DWI joint reconstruction model with a gradient descent algorithm, searching for the model parameters θ that minimize the loss function L(θ); the minimizing parameters are denoted θ̂.

Step 5: input a new undersampled DWI image y into G(·, θ̂); after forward propagation of the model, the final reconstructed image x̂ containing different b values is obtained.

2. The AI-based human body multi-core DWI joint reconstruction method according to claim 1, characterized in that said Step 1 comprises the following steps:

Step 1.1: acquire, from a magnetic resonance imager, a fully sampled DWI image x_0 with diffusion sensitivity factor b = 0.

Step 1.2: generate the fully sampled DWI image x: using the b = 0 DWI image x_0 and the DWI signal diffusion model, obtain the DWI image x_b for each b value; the x_b of the various b values are combined into the fully sampled DWI image x.

Step 1.3: establish the human body multi-core DWI image training set: generate an undersampling matrix U and retrospectively undersample the fully sampled DWI image x with U to obtain the undersampled DWI image y; the undersampled DWI images y and the fully sampled DWI images x form the human body multi-core DWI image training set.

3. The AI-based human body multi-core DWI joint reconstruction method according to claim 2, characterized in that in said Step 1.2 the DWI signal diffusion model is:

$$x_b = x_0\, e^{-b D_T}\,\sqrt{\frac{\pi}{4 b\,(D_L - D_T)}}\;\Phi\!\left(\sqrt{b\,(D_L - D_T)}\right)$$

where b is the diffusion sensitivity factor, D_L and D_T are the longitudinal and transverse diffusion coefficients respectively, Φ is the error function, and x_0 is the DWI image with diffusion sensitivity factor b = 0.

4. The AI-based human body multi-core DWI joint reconstruction method according to claim 1, characterized in that the human body multi-core DWI joint reconstruction model comprises a residual dense block and a data consistency layer; the residual dense block has three parts: a feature extraction layer, a dense block, and a reconstruction layer with a residual connection; the feature extraction layer extracts features from the model input to generate a first feature map and feeds it to the dense block; the dense block further extracts features from the first feature map to obtain a second feature map and feeds it to the reconstruction layer with the residual connection; the reconstruction layer synthesizes the second feature map into a residual image and then applies the residual connection to obtain a preliminary reconstructed image x_c; the preliminary reconstructed image x_c is fed into the data consistency layer to obtain the final reconstructed image x̂.

5. The AI-based human body multi-core DWI joint reconstruction method according to claim 4, characterized in that feeding the preliminary reconstructed image x_c into the data consistency layer to obtain the final reconstructed image x̂ comprises the following steps: the data consistency layer substitutes the k-space data of x_c into the following formula to obtain the data-consistent k-space data k_DC, and applies the inverse Fourier transform to k_DC to obtain the final reconstructed image x̂:

$$k_{DC}(j) = \begin{cases} k_0(j), & j \in \Omega \\ k_c(j), & j \notin \Omega \end{cases}$$

where k_c = F x_c, k_0 = F y, F is the Fourier transform, j is the k-space coordinate, k_DC(j) is the value of the data-consistent k-space data k_DC at j, and Ω denotes the set of k-space coordinates sampled in the undersampled DWI image y.
CN201911400857.1A 2019-12-30 2019-12-30 An AI-based method for joint reconstruction of human multi-core DWI Active CN111161370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400857.1A CN111161370B (en) 2019-12-30 2019-12-30 An AI-based method for joint reconstruction of human multi-core DWI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911400857.1A CN111161370B (en) 2019-12-30 2019-12-30 An AI-based method for joint reconstruction of human multi-core DWI

Publications (2)

Publication Number Publication Date
CN111161370A true CN111161370A (en) 2020-05-15
CN111161370B CN111161370B (en) 2021-10-29

Family

ID=70559371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400857.1A Active CN111161370B (en) 2019-12-30 2019-12-30 An AI-based method for joint reconstruction of human multi-core DWI

Country Status (1)

Country Link
CN (1) CN111161370B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085197A (en) * 2020-09-11 2020-12-15 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN113066145A (en) * 2021-04-29 2021-07-02 武汉聚垒科技有限公司 Rapid whole-body diffusion weighted imaging method based on deep learning and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309328A (en) * 2011-10-19 2012-01-11 中国科学院深圳先进技术研究院 Diffusion-tensor imaging method and system
CN106249183A (en) * 2016-09-24 2016-12-21 中国科学院武汉物理与数学研究所 A kind of hyperpolarization xenon magnetic resonance method based on spectrum picture integration
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN109256023A (en) * 2018-11-28 2019-01-22 中国科学院武汉物理与数学研究所 A kind of measurement method of pulmonary airways microstructure model
CN109410289A (en) * 2018-11-09 2019-03-01 中国科学院武汉物理与数学研究所 A kind of high lack sampling hyperpolarized gas lung MRI method for reconstructing of deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309328A (en) * 2011-10-19 2012-01-11 中国科学院深圳先进技术研究院 Diffusion-tensor imaging method and system
CN106249183A (en) * 2016-09-24 2016-12-21 中国科学院武汉物理与数学研究所 A kind of hyperpolarization xenon magnetic resonance method based on spectrum picture integration
CN108717717A (en) * 2018-04-23 2018-10-30 东南大学 The method rebuild based on the sparse MRI that convolutional neural networks and alternative manner are combined
CN109410289A (en) * 2018-11-09 2019-03-01 中国科学院武汉物理与数学研究所 A kind of high lack sampling hyperpolarized gas lung MRI method for reconstructing of deep learning
CN109256023A (en) * 2018-11-28 2019-01-22 中国科学院武汉物理与数学研究所 A kind of measurement method of pulmonary airways microstructure model

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIANPING ZHONG et al.: "Simultaneous assessment of both lung morphometry and gas exchange function within a single breath-hold by hyperpolarized 129Xe MRI", NMR IN BIOMEDICINE *
RONGZHAO ZHANG: "Automatic Segmentation of Acute Ischemic Stroke From DWI Using 3-D Fully Convolutional DenseNets", IEEE TRANSACTIONS ON MEDICAL IMAGING *
ZHANG HUITING: "Hyperpolarized 129Xe diffusion-weighted MRI methods and their application to the study of pulmonary diseases", China Doctoral Dissertations Full-text Database (Medicine and Health Sciences) *
WANG PING et al.: "Ischemic stroke lesion segmentation algorithm based on 3D deep residual network and cascaded U-Net", Journal of Computer Applications *
WANG KE et al.: "Application of diffusion-weighted imaging based on non-Gaussian distribution models in body diseases", Chinese Journal of Magnetic Resonance Imaging *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085197A (en) * 2020-09-11 2020-12-15 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN112085197B (en) * 2020-09-11 2022-07-22 推想医疗科技股份有限公司 Neural network model training method and device, storage medium and electronic equipment
CN113066145A (en) * 2021-04-29 2021-07-02 武汉聚垒科技有限公司 Rapid whole-body diffusion weighted imaging method based on deep learning and related equipment
CN113066145B (en) * 2021-04-29 2023-12-26 武汉聚垒科技有限公司 Deep learning-based rapid whole-body diffusion weighted imaging method and related equipment

Also Published As

Publication number Publication date
CN111161370B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
Sandino et al. Accelerating cardiac cine MRI using a deep learning‐based ESPIRiT reconstruction
CN106780372B (en) A Weighted Kernel Norm Magnetic Resonance Imaging Reconstruction Method Based on Generalized Tree Sparse
Poddar et al. Dynamic MRI using smoothness regularization on manifolds (SToRM)
US11170543B2 (en) MRI image reconstruction from undersampled data using adversarially trained generative neural network
Hamilton et al. Machine learning for rapid magnetic resonance fingerprinting tissue property quantification
CN106485764B (en) Fast and Accurate Reconstruction Method of MRI Image
CN110148215B (en) Four-dimensional magnetic resonance image reconstruction method based on smooth constraint and local low-rank constraint model
Shen et al. Rapid reconstruction of highly undersampled, non‐Cartesian real‐time cine k‐space data using a perceptual complex neural network (PCNN)
CN104013403B (en) A kind of three-dimensional cardiac MR imaging method based on resolution of tensor sparse constraint
CN110942496A (en) Magnetic Resonance Image Reconstruction Method and System Based on Propeller Sampling and Neural Network
CN111161370B (en) An AI-based method for joint reconstruction of human multi-core DWI
CN111784792A (en) A fast magnetic resonance reconstruction system based on double-domain convolutional neural network and its training method and application
CN106618571A (en) Nuclear magnetic resonance imaging method and system
CN113192151B (en) MRI image reconstruction method based on structural similarity
CN113506258B (en) Under-sampling lung gas MRI reconstruction method for multitask complex value deep learning
CN104597419A (en) Method for correcting motion artifacts in combination of navigation echoes and compressed sensing
Hou et al. Pncs: Pixel-level non-local method based compressed sensing undersampled mri image reconstruction
CN105488757B (en) A kind of method of the sparse reconstruction of brain fiber
CN113866694B (en) Rapid three-dimensional magnetic resonance T1 quantitative imaging method, system and medium
CN115471580A (en) A physically intelligent high-definition magnetic resonance diffusion imaging method
Konovalov Compressed-sensing-inspired reconstruction algorithms in low-dose computed tomography: A review
CN106093814A (en) A kind of cardiac magnetic resonance imaging method based on multiple dimensioned low-rank model
CN112489150B (en) Multi-scale sequential training method of deep neural network for rapid MRI
CN118570323A (en) Physical intelligent high-definition diffusion tensor magnetic resonance reconstruction and quantification method
JP6730995B2 (en) Method and system for generating MR image of moving object in environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211011

Address after: 430071 No. 30 Xiaohongshan West, Wuchang District, Wuhan, Hubei

Applicant after: Institute of precision measurement science and technology innovation, Chinese Academy of Sciences

Address before: 430071 No. 30 Xiaohongshan West, Wuchang District, Wuhan, Hubei

Applicant before: WUHAN INSTITUTE OF PHYSICS AND MATHEMATICS, CHINESE ACADEMY OF SCIENCES

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant