CN112700508A - Multi-contrast MRI image reconstruction method based on deep learning - Google Patents

Multi-contrast MRI image reconstruction method based on deep learning

Info

Publication number
CN112700508A
CN112700508A
Authority
CN
China
Prior art keywords
contrast
mri image
sampling
mri
full
Prior art date
Legal status
Granted
Application number
CN202011589581.9A
Other languages
Chinese (zh)
Other versions
CN112700508B (en)
Inventor
蔡越
罗玉
凌捷
柳毅
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202011589581.9A
Publication of CN112700508A
Application granted
Publication of CN112700508B
Active legal status
Anticipated expiration

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a multi-contrast MRI image reconstruction method based on deep learning, and relates to the technical field of medical image processing. Real fully sampled MRI images, together with the corresponding images reconstructed after random undersampling in k-space, are collected to form training samples, and a deep convolutional neural network is trained on them. Undersampled MRI images of different contrasts are then fed to the network, which outputs preliminary fully sampled MRI images of the corresponding contrasts; an encoder extracts structural features and contrast features from these preliminary images, a similarity constraint is applied to the structural features, and a generator finally produces the complete MRI images of the different contrasts. This overcomes the shortcoming that most current models reconstruct from a single-contrast MRI image only and do not exploit the correlated information among multi-contrast MRI images, thereby improving the reconstruction quality of MRI images and ensuring the reliability of the diagnosis results of medical systems.

Description

Multi-contrast MRI image reconstruction method based on deep learning
Technical Field
The invention relates to the technical field of medical image processing, in particular to a multi-contrast MRI image reconstruction method based on deep learning.
Background
Magnetic Resonance Imaging (MRI) is an important and widely used medical imaging technique for imaging the internal structures of the human body. An MR scan can generally produce images of different contrasts, such as T1 and T2. In actual diagnosis, doctors need complete MRI images of multiple contrasts to diagnose disease, so a patient must be scanned several times over a long period; the increased scanning time not only causes patient discomfort but may also introduce motion artifacts and reduces the utilization efficiency of the magnetic resonance scanner. Shortening the scanning time reduces the amount of acquired data, which in turn degrades MRI image quality and hinders accurate diagnosis. This contradiction between data acquisition time and image quality has driven research on MRI compressed sensing and image reconstruction techniques.
Generally speaking, MRI image reconstruction methods fall into two broad categories: the first is based on classical compressed sensing theory and reconstructs the MRI image from the undersampled measurements themselves; the second is based on deep learning and, driven by large amounts of data, relies on the information contained in MRI image datasets to perform the reconstruction.
The first class of methods originates from the compressed sensing theory proposed by Donoho (D. L. Donoho, Compressed sensing, 2006), which breaks through the limitation of the classical sampling theorem by fully exploiting the prior that the signal itself, or its representation in some transform domain, is sparse: a small number of data points are sampled directly by random projection, and the original signal is recovered with a nonlinear reconstruction algorithm provided the restricted isometry property proposed by Tao et al. is satisfied (T. Tao et al., Stable signal recovery from incomplete and inaccurate measurements, 2006). Such methods have good interpretability, but fully random sampling cannot be realized because of the limitations of MRI hardware sampling systems. Lustig et al. therefore extended the theory to the MRI image reconstruction problem, proposing random sampling on a fixed transform basis and reconstructing the MRI image under a sparsity constraint, which applied compressed sensing to the medical imaging field for the first time [M. Lustig, D. Donoho, and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, 2007]. The non-adaptive transform basis, however, limits the representational capability of the model, and to improve its adaptivity part of the subsequent work exploits the geometric information of the image for reconstruction [X. Qu et al., Undersampled MRI reconstruction with patch-based directional wavelets, 2012]. Although models based on adaptive transforms achieve higher reconstruction quality, they require heavy computational cost.
The second class of methods relies on the deep learning technique proposed by Hinton et al., which shows significant advantages in exploiting large data resources with deep neural networks [Geoffrey E. Hinton et al., A fast learning algorithm for deep belief nets, 2006]. Unlike the first class of reconstruction methods, a deep network extracts features from large amounts of data to build increasingly abstract representations, replacing traditional hand-crafted features and hand-designed algorithms. Moreover, deep-learning-based image reconstruction needs no explicit prior assumption; it only has to learn the mapping between low-resolution and high-resolution images from training samples, and network structures such as convolutional neural networks have strong nonlinear expression and mapping capabilities, so these methods perform very well, although the strong expressive power of deep learning is still difficult to explain theoretically. Among existing deep-learning-based MRI reconstruction methods, a plain convolutional neural network has been trained on a large MRI dataset to learn the mapping from undersampled to fully sampled MRI images [S. Wang et al., Accelerating magnetic resonance imaging via deep learning, 2016], and a deep cascade of convolutional neural networks has also been proposed [J. Schlemper et al., A deep cascade of convolutional neural networks for MR image reconstruction, 2017]. However, these methods achieve good speed and performance only for single-contrast MRI reconstruction, whereas an MRI scan usually yields images of different contrasts, such as T1 and T2, of the same body part; their reconstruction of multi-contrast MRI images is poor, so the accuracy of the diagnosis result cannot be guaranteed.
Disclosure of Invention
In order to solve the problem that current MRI image reconstruction methods cannot guarantee the reconstruction quality of multi-contrast MRI images, the invention provides a multi-contrast MRI image reconstruction method based on deep learning, which improves the reconstruction quality of MRI images and ensures the accuracy and reliability of the diagnosis results of medical systems.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a multi-contrast MRI image reconstruction method based on deep learning at least comprises the following steps:
S1, collecting real fully sampled MRI images to form the real labels of a training set for supervision, and training a deep convolutional neural network Convnet(·) with the training set samples;
S2, taking the T1-contrast undersampled MRI image as the input of the deep convolutional neural network Convnet(·) and outputting a preliminary fully sampled T1-contrast MRI image; taking the T2-contrast undersampled MRI image as the input of Convnet(·) and outputting a preliminary fully sampled T2-contrast MRI image, where T1 and T2 denote MRI images of different contrasts;
S3, using an encoder to extract structural features and contrast features from the preliminary fully sampled T1-contrast MRI image, and using the encoder to extract structural features and contrast features from the preliminary fully sampled T2-contrast MRI image;
S4, applying a similarity constraint between the extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image;
S5, inputting the similarity-constrained structural features of the preliminary fully sampled T1-contrast MRI image together with its contrast features into a generator to generate the final complete T1-contrast MRI image, and inputting the similarity-constrained structural features of the preliminary fully sampled T2-contrast MRI image together with its contrast features into the generator to generate the final complete T2-contrast MRI image.
In this technical scheme, based on deep learning, real fully sampled MRI images are collected to form training samples and a deep convolutional neural network is trained. Undersampled MRI images of different contrasts are then fed to the deep convolutional neural network Convnet(·), which outputs preliminary fully sampled MRI images of the corresponding contrasts; an encoder extracts structural features and contrast features from these preliminary images and a similarity constraint is applied, and a generator finally produces the complete MRI images of the different contrasts. The method can therefore reconstruct undersampled MRI images of multiple contrasts simultaneously, overcoming the shortcoming that most current models reconstruct only single-contrast MRI images and perform poorly on multi-contrast reconstruction, which improves the reconstruction quality of MRI images and ensures the accuracy and reliability of the diagnosis results of medical systems.
Preferably, the method further comprises: inputting the real fully sampled T1-contrast MRI image and the complete T1-contrast MRI image generated by the generator into a discriminator, inputting the real fully sampled T2-contrast MRI image and the complete T2-contrast MRI image generated by the generator into the discriminator, and performing adversarial training to form a reconstruction model, so as to improve the final reconstruction capability of the generator.
Preferably, the deep convolutional neural network Convnet(·) in step S1 is trained by a gradient descent method.
Preferably, the deep convolutional neural network Convnet (·) maps undersampled MRI images of different contrasts to fully sampled MRI images.
Preferably, in step S2 the T1-contrast undersampled MRI image is obtained from the real fully sampled T1-contrast MRI image through a first processing flow, and the T2-contrast undersampled MRI image is obtained from the real fully sampled T2-contrast MRI image through a second processing flow; the first processing flow comprises the following steps:
1) performing a Fourier transform on the real fully sampled T1-contrast MRI image;
2) randomly down-sampling the fully sampled T1-contrast MRI image in k-space at the ratio M;
3) replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
4) performing an inverse Fourier transform to obtain the T1-contrast undersampled MRI image;
the second processing flow comprises the following steps:
A. performing a Fourier transform on the real fully sampled T2-contrast MRI image;
B. randomly down-sampling the fully sampled T2-contrast MRI image in k-space at the ratio M;
C. replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
D. performing an inverse Fourier transform to obtain the T2-contrast undersampled MRI image.
Both processing flows simulate the undersampled MRI images of different contrasts encountered in real scenarios, which ensures the reliability of the method provided by the invention.
Preferably, the real fully sampled T1-contrast MRI image and the real fully sampled T2-contrast MRI image appear in pairs and are acquired from the same body part, so that, based on the consistency of structure, a similarity constraint can be applied to the structural features generated from the two images of different contrasts in the encoding stage.
Preferably, the undersampled MRI image x_i ∈ R^N satisfies:
y_i = G(E(Convnet(x_i)))
where x_i ∈ R^N is an undersampled MRI image of any one of the multiple contrasts and is the input of the deep convolutional neural network Convnet(·); y_i ∈ R^N is the finally obtained fully sampled MRI image; E(·) denotes the encoder, which extracts structural features and contrast features from the preliminary fully sampled MRI image; and G(·) is the generator, which takes the structural features and contrast features as input and generates the complete MRI image.
Preferably, the deep convolutional neural network Convnet(·) is trained in a supervised manner; its loss function can be written as
L_pri = (1/N)·Σ_{i=1..N} [ θ1·||Convnet(x_i^T1) − y_i^T1||² + θ2·||Convnet(x_i^T2) − y_i^T2||² ]
where L_pri denotes the loss function of the deep convolutional neural network Convnet(·); x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space with D(·) after the Fourier transform F_u(·) and then zero-filling; y_i^Tj is the i-th real fully sampled Tj-contrast MRI image; F_u(·) is the Fourier transform function that converts the MRI image into k-space; D(·) is the random down-sampling function that randomly down-samples the k-space of the complete MRI image at the ratio M; N is the total number of samples; and θ1 and θ2 are the weight coefficients of the loss function L_pri.
Preferably, after the similarity constraint is applied between the extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image, the structural consistency loss can be written as
L_cnt = (1/N)·Σ_{i=1..N} ||C_iTm − C_iTm+1||²
where L_cnt denotes the structural consistency loss, C_iTm ∈ R^N and C_iTm+1 ∈ R^N denote the structural features of the T1-contrast and T2-contrast MRI images respectively, and N is the total number of samples.
Preferably, the deep convolutional neural network Convnet(·), the encoder, the generator and the discriminator constitute a generative adversarial network whose loss function comprises the self-encoding loss L_ate, the discriminator loss L_dis and the structural consistency loss L_cnt.
The self-encoding loss L_ate can be written as
L_ate = (1/N)·Σ_{i=1..N} ||G(E(Convnet(x_i^Tj))) − Convnet(x_i^Tj)||²
where x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space and zero-filling, and N is the total number of samples.
The discriminator loss L_dis can be written as
L_dis = (1/N)·Σ_{i=1..N} [ log Dis(y_i^Tj) + log(1 − Dis(G(E(Convnet(x_i^Tj))))) ]
where Dis(·) denotes the discriminator and y_i^Tj is the i-th real fully sampled Tj-contrast MRI image.
total loss function L of the reconstructed modeltotalThe expression of (a) is:
Ltotal=ρ1·Lpri+ρ2·Late+ρ3·Ldis+ρ4·Lcnt
where ρ 1, ρ 2, ρ 3, ρ 4 each represent a weight coefficient of the loss function.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a multi-contrast MRI image reconstruction method based on deep learning, which is characterized in that a training set sample is formed by collecting real full-sampling MRI images, a deep convolutional neural network is trained, then the under-sampled MRI images with different contrasts are used as the input of the deep convolution neural network, the preliminary full-sampled MRI images with different contrasts are output, extracting structural features and contrast features of the primary full-sampling MRI images with different contrasts through an encoder, then carrying out similarity constraint, finally generating final complete MRI images with different contrasts through a generator, the method and the device can reconstruct the multiple contrast undersampled MRI images simultaneously, overcome the defect that most of the current models only reconstruct single-contrast MRI images and have poor effect when multi-contrast MRI images are reconstructed, improve the reconstruction quality of the MRI images and ensure the accuracy and reliability of the diagnosis result of a medical system.
Drawings
Fig. 1 is a flowchart of a deep learning-based multi-contrast MRI image reconstruction method proposed in an embodiment of the present invention;
Fig. 2 is a framework diagram of the reconstruction model for reconstructing undersampled MRI images of different contrasts according to an embodiment of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for better illustration of the present embodiment, certain parts of the drawings may be omitted, enlarged or reduced, and do not represent actual dimensions;
it will be understood by those skilled in the art that certain well-known descriptions of the figures may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
Fig. 1 shows a flowchart of the deep-learning-based multi-contrast MRI image reconstruction method. Referring to Fig. 1, the method at least comprises the following steps:
S1, collecting real fully sampled MRI images to form the real labels of a training set for supervision, and training a deep convolutional neural network Convnet(·) with the training set samples;
S2, taking the T1-contrast undersampled MRI image as the input of the deep convolutional neural network Convnet(·) and outputting a preliminary fully sampled T1-contrast MRI image; taking the T2-contrast undersampled MRI image as the input of Convnet(·) and outputting a preliminary fully sampled T2-contrast MRI image, where T1 and T2 denote MRI images of different contrasts;
S3, using an encoder to extract structural features and contrast features from the preliminary fully sampled T1-contrast MRI image, and using the encoder to extract structural features and contrast features from the preliminary fully sampled T2-contrast MRI image;
S4, applying a similarity constraint between the extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image;
S5, inputting the similarity-constrained structural features of the preliminary fully sampled T1-contrast MRI image together with its contrast features into a generator to generate the final complete T1-contrast MRI image, and inputting the similarity-constrained structural features of the preliminary fully sampled T2-contrast MRI image together with its contrast features into the generator to generate the final complete T2-contrast MRI image.
Based on deep learning, real fully sampled MRI images are collected to form training samples and the deep convolutional neural network Convnet(·) is trained; in this embodiment the deep convolutional neural network Convnet(·) is a VGGNet model. Undersampled MRI images of different contrasts are then fed to Convnet(·), which outputs preliminary fully sampled MRI images of the corresponding contrasts; the encoder extracts structural features and contrast features from these preliminary images and a similarity constraint is applied, and the generator finally produces the complete MRI images of the different contrasts. In other words, undersampled MRI images of multiple contrasts are reconstructed simultaneously, overcoming the shortcoming that most current models reconstruct only single-contrast MRI images and perform poorly on multi-contrast reconstruction, which improves the reconstruction quality of MRI images and ensures the accuracy and reliability of the diagnosis results of the medical system.
In this embodiment the method further comprises: inputting the real fully sampled T1-contrast MRI image and the complete T1-contrast MRI image generated by the generator into a discriminator, inputting the real fully sampled T2-contrast MRI image and the complete T2-contrast MRI image generated by the generator into the discriminator, and performing adversarial training to form the reconstruction model, so as to improve the final reconstruction capability of the generator. The deep convolutional neural network Convnet(·) of step S1 is trained by a gradient descent method, and Convnet(·) maps undersampled MRI images of different contrasts to fully sampled MRI images.
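As a concrete illustration of the VGG-style Convnet(·) and its gradient-descent training described above, the following Python/PyTorch sketch shows one possible realization; the layer depth, channel widths, residual connection and optimizer settings are assumptions made for illustration and are not specified by the patent.

import torch
import torch.nn as nn

class Convnet(nn.Module):
    # Illustrative VGG-style stack of 3x3 convolutions mapping an undersampled
    # MRI image to a preliminary fully sampled one (depth/width are assumed).
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict a correction to the zero-filled input (residual learning is
        # a common choice here, not something stated in this text).
        return x + self.body(x)

convnet = Convnet()
optimizer = torch.optim.SGD(convnet.parameters(), lr=1e-3)  # plain gradient descent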
In this embodiment, in step S2 the T1-contrast undersampled MRI image is obtained from the real fully sampled T1-contrast MRI image through a first processing flow, and the T2-contrast undersampled MRI image is obtained from the real fully sampled T2-contrast MRI image through a second processing flow; the first processing flow comprises the following steps:
1) performing a Fourier transform on the real fully sampled T1-contrast MRI image;
2) randomly down-sampling the fully sampled T1-contrast MRI image in k-space at the ratio M;
3) replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
4) performing an inverse Fourier transform to obtain the T1-contrast undersampled MRI image;
the second processing flow comprises the following steps:
A. performing a Fourier transform on the real fully sampled T2-contrast MRI image;
B. randomly down-sampling the fully sampled T2-contrast MRI image in k-space at the ratio M;
C. replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
D. performing an inverse Fourier transform to obtain the T2-contrast undersampled MRI image.
Here, both flows simulate undersampled MRI images of different contrasts in a real scenario, and the overall process can be expressed as:
Fourier transform F_u(·) → random down-sampling D(·) → zero-padding P(·) → inverse Fourier transform iF_u(·)
The random down-sampling D(·) randomly samples the k-space of the complete MRI image at a certain ratio, and the zero-padding P(·) replaces the Fourier coefficients lost in k-space after the random down-sampling with zeros.
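The following NumPy sketch shows one way to implement this F_u(·) → D(·) → P(·) → iF_u(·) pipeline; the uniform random mask and the magnitude output are assumptions made for illustration, since the text only specifies random sampling at the ratio M in k-space.

import numpy as np

def simulate_undersampled(y_full, M, rng=None):
    # Simulate an undersampled MRI image from a fully sampled image y_full.
    rng = np.random.default_rng() if rng is None else rng
    k = np.fft.fft2(y_full)               # F_u(.): image domain -> k-space
    mask = rng.random(k.shape) < M        # D(.): keep a fraction M of k-space points
    k_zero_filled = np.where(mask, k, 0)  # P(.): lost coefficients replaced by zero
    x_under = np.fft.ifft2(k_zero_filled) # iF_u(.): back to the image domain
    return np.abs(x_under), mask

# Example: 30% random k-space sampling of a 256x256 fully sampled image.
y_full = np.random.rand(256, 256)
x_under, mask = simulate_undersampled(y_full, M=0.3)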
In this embodiment, the real fully sampled T1-contrast MRI image and the real fully sampled T2-contrast MRI image appear in pairs and are acquired from the same body part, so that, based on the consistency of structure, a similarity constraint can be applied to the structural features generated from the two images of different contrasts in the encoding stage.
The undersampled MRI image x_i ∈ R^N satisfies:
y_i = G(E(Convnet(x_i)))
where x_i ∈ R^N is an undersampled MRI image of any one of the multiple contrasts and is the input of the deep convolutional neural network Convnet(·); y_i ∈ R^N is the finally obtained fully sampled MRI image; E(·) denotes the encoder, which extracts structural features and contrast features from the preliminary fully sampled MRI image; and G(·) is the generator, which takes the structural features and contrast features as input and generates the complete MRI image.
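The composition y_i = G(E(Convnet(x_i))) can be sketched in Python/PyTorch as follows; the two-headed encoder and the concatenation-based generator are illustrative assumptions, as the text does not fix their architectures.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # E(.): returns a (structural features, contrast features) pair.
    def __init__(self, feat=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.structure_head = nn.Conv2d(feat, feat, 3, padding=1)
        self.contrast_head = nn.Conv2d(feat, feat, 3, padding=1)

    def forward(self, img):
        h = self.shared(img)
        return self.structure_head(h), self.contrast_head(h)

class Generator(nn.Module):
    # G(.): fuses structural and contrast features into a complete MRI image.
    def __init__(self, feat=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, structure, contrast):
        return self.fuse(torch.cat([structure, contrast], dim=1))

E, G = Encoder(), Generator()
prelim = torch.randn(1, 1, 64, 64)   # stands in for Convnet(x_i)
C, S = E(prelim)                     # structural and contrast features
y = G(C, S)                          # final complete MRI image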
In this embodiment, the deep convolutional neural network Convnet(·) is trained in a supervised manner; its loss function can be written as
L_pri = (1/N)·Σ_{i=1..N} [ θ1·||Convnet(x_i^T1) − y_i^T1||² + θ2·||Convnet(x_i^T2) − y_i^T2||² ]
where L_pri denotes the loss function of the deep convolutional neural network Convnet(·); x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space with D(·) after the Fourier transform F_u(·) and then zero-filling; y_i^Tj is the i-th real fully sampled Tj-contrast MRI image; F_u(·) is the Fourier transform function that converts the MRI image into k-space; D(·) is the random down-sampling function that randomly down-samples the k-space of the complete MRI image at the ratio M; N is the total number of samples; and θ1 and θ2 are the weight coefficients of the loss function L_pri.
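Under the assumption of the squared-error form written above (which this text does not reproduce explicitly), L_pri can be computed as in the following sketch, with theta1 and theta2 as the weight coefficients.

import torch.nn.functional as F

def loss_pri(pred_t1, y_t1, pred_t2, y_t2, theta1=1.0, theta2=1.0):
    # Supervised Convnet loss: weighted reconstruction errors of the T1 and T2
    # branches, averaged over the batch (squared-L2 form assumed).
    return theta1 * F.mse_loss(pred_t1, y_t1) + theta2 * F.mse_loss(pred_t2, y_t2)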
The extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image are subjected to the similarity constraint, after which the structural consistency loss can be written as
L_cnt = (1/N)·Σ_{i=1..N} ||C_iTm − C_iTm+1||²
where L_cnt denotes the structural consistency loss, C_iTm ∈ R^N and C_iTm+1 ∈ R^N denote the structural features of the T1-contrast and T2-contrast MRI images respectively, and N is the total number of samples.
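A minimal sketch of this similarity constraint, again assuming a squared-L2 distance between the two structural feature maps:

import torch.nn.functional as F

def loss_cnt(C_t1, C_t2):
    # Structural consistency loss between the T1- and T2-contrast structural
    # features produced by the encoder (squared-L2 form assumed).
    return F.mse_loss(C_t1, C_t2)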
As shown in Fig. 2, MRI image reconstruction with the two contrasts T1 and T2 is taken as an example for further explanation. The i-th real fully sampled T1-contrast MRI image y_i^T1 is down-sampled in k-space and zero-filled to obtain the undersampled MRI image x_i^T1 simulating a real acquisition, and the i-th real fully sampled T2-contrast MRI image y_i^T2 is processed in the same way to obtain x_i^T2. The two undersampled images are input separately into the deep convolutional neural network Convnet(·), which outputs the preliminary fully sampled T1-contrast MRI image and the preliminary fully sampled T2-contrast MRI image. The encoder E(·) extracts the structural features C_iT1 and the contrast features S_iT1 from the preliminary fully sampled T1-contrast MRI image, and the structural features C_iT2 and the contrast features S_iT2 from the preliminary fully sampled T2-contrast MRI image. The structural consistency loss L_cnt is computed between C_iT1 and C_iT2, which imposes the similarity constraint on the structural features of the two preliminary images. After the similarity constraint, the structural features C_iT1 and the contrast features S_iT1 of the preliminary fully sampled T1-contrast MRI image are input into the generator G(·) to generate the final complete T1-contrast MRI image, and the structural features C_iT2 and the contrast features S_iT2 of the preliminary fully sampled T2-contrast MRI image are input into G(·) to generate the final complete T2-contrast MRI image. Finally, the i-th real fully sampled T1-contrast MRI image y_i^T1, the real fully sampled T2-contrast MRI image y_i^T2, the complete T1-contrast MRI image generated by the generator and the complete T2-contrast MRI image generated by the generator are input into the discriminator Dis(·) for adversarial training to form the reconstruction model.
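A minimal Python/PyTorch sketch of this adversarial step follows; the discriminator architecture and the standard logistic GAN objective are assumptions, since the text only specifies that the real and generated images of each contrast are fed to Dis(·) for adversarial training.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Dis(.): an illustrative convolutional classifier returning the probability
    # that its input is a real fully sampled MRI image.
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 1, 4, stride=2, padding=1))

    def forward(self, img):
        return torch.sigmoid(self.net(img)).mean(dim=(1, 2, 3))

dis = Discriminator()
eps = 1e-8
y_real = torch.randn(2, 1, 64, 64)   # real fully sampled images (one contrast)
y_fake = torch.randn(2, 1, 64, 64)   # images produced by the generator
# Standard logistic discriminator objective; applied to T1 and T2 images in turn.
d_loss = -(torch.log(dis(y_real) + eps) + torch.log(1 - dis(y_fake) + eps)).mean()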
The deep convolutional neural network Convnet(·), the encoder, the generator and the discriminator constitute a generative adversarial network whose loss function comprises the self-encoding loss L_ate, the discriminator loss L_dis and the structural consistency loss L_cnt.
The self-encoding loss L_ate can be written as
L_ate = (1/N)·Σ_{i=1..N} ||G(E(Convnet(x_i^Tj))) − Convnet(x_i^Tj)||²
where x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space and zero-filling, and N is the total number of samples.
The discriminator loss L_dis can be written as
L_dis = (1/N)·Σ_{i=1..N} [ log Dis(y_i^Tj) + log(1 − Dis(G(E(Convnet(x_i^Tj))))) ]
where Dis(·) denotes the discriminator and y_i^Tj is the i-th real fully sampled Tj-contrast MRI image.
total loss function L of the reconstructed modeltotalThe expression of (a) is:
Ltotal=ρ1·Lpri+ρ2·Late+ρ3·Ldis+ρ4·Lcnt
where ρ 1, ρ 2, ρ 3, ρ 4 each represent a weight coefficient of the loss function.
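Combining the four terms is then straightforward, as in the sketch below; the numerical values of the ρ weights are illustrative placeholders, since the text does not give them.

def total_loss(l_pri, l_ate, l_dis, l_cnt, rho=(1.0, 1.0, 0.1, 0.5)):
    # L_total = rho1*L_pri + rho2*L_ate + rho3*L_dis + rho4*L_cnt
    # (the weights here are placeholders, not values from the patent).
    rho1, rho2, rho3, rho4 = rho
    return rho1 * l_pri + rho2 * l_ate + rho3 * l_dis + rho4 * l_cnt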
The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A multi-contrast MRI image reconstruction method based on deep learning is characterized by at least comprising the following steps:
S1, collecting real fully sampled MRI images to form the real labels of a training set for supervision, and training a deep convolutional neural network Convnet(·) with the training set samples;
S2, taking the T1-contrast undersampled MRI image as the input of the deep convolutional neural network Convnet(·) and outputting a preliminary fully sampled T1-contrast MRI image; taking the T2-contrast undersampled MRI image as the input of Convnet(·) and outputting a preliminary fully sampled T2-contrast MRI image, where T1 and T2 denote MRI images of different contrasts;
S3, using an encoder to extract structural features and contrast features from the preliminary fully sampled T1-contrast MRI image, and using the encoder to extract structural features and contrast features from the preliminary fully sampled T2-contrast MRI image;
S4, applying a similarity constraint between the extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image;
S5, inputting the similarity-constrained structural features of the preliminary fully sampled T1-contrast MRI image together with its contrast features into a generator to generate the final complete T1-contrast MRI image, and inputting the similarity-constrained structural features of the preliminary fully sampled T2-contrast MRI image together with its contrast features into the generator to generate the final complete T2-contrast MRI image.
2. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 1, further comprising: inputting the real fully sampled T1-contrast MRI image and the complete T1-contrast MRI image generated by the generator into a discriminator, inputting the real fully sampled T2-contrast MRI image and the complete T2-contrast MRI image generated by the generator into the discriminator, and performing adversarial training to form a reconstruction model, so as to improve the final reconstruction capability of the generator.
3. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 2, characterized in that the deep convolutional neural network Convnet(·) of step S1 is trained by a gradient descent method.
4. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 3, characterized in that the deep convolutional neural network Convnet(·) maps undersampled MRI images of different contrasts to fully sampled MRI images.
5. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 4, characterized in that in step S2 the T1-contrast undersampled MRI image is obtained from the real fully sampled T1-contrast MRI image through a first processing flow, and the T2-contrast undersampled MRI image is obtained from the real fully sampled T2-contrast MRI image through a second processing flow; the first processing flow comprises the following steps:
1) performing a Fourier transform on the real fully sampled T1-contrast MRI image;
2) randomly down-sampling the fully sampled T1-contrast MRI image in k-space at the ratio M;
3) replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
4) performing an inverse Fourier transform to obtain the T1-contrast undersampled MRI image;
the second processing flow comprises the following steps:
A. performing a Fourier transform on the real fully sampled T2-contrast MRI image;
B. randomly down-sampling the fully sampled T2-contrast MRI image in k-space at the ratio M;
C. replacing the Fourier coefficients lost in k-space after the random down-sampling with zeros;
D. performing an inverse Fourier transform to obtain the T2-contrast undersampled MRI image.
6. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 5, characterized in that the real fully sampled T1-contrast MRI image and the real fully sampled T2-contrast MRI image appear in pairs and are acquired from the same body part.
7. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 6, characterized in that the undersampled MRI image x_i ∈ R^N satisfies:
y_i = G(E(Convnet(x_i)))
where x_i ∈ R^N is an undersampled MRI image of any one of the multiple contrasts and is the input of the deep convolutional neural network Convnet(·); y_i ∈ R^N is the finally obtained fully sampled MRI image; E(·) denotes the encoder, which extracts structural features and contrast features from the preliminary fully sampled MRI image; and G(·) is the generator, which takes the structural features and contrast features as input and generates the complete MRI image.
8. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 7, characterized in that the deep convolutional neural network Convnet(·) is trained in a supervised manner, with the loss function expressed as
L_pri = (1/N)·Σ_{i=1..N} [ θ1·||Convnet(x_i^T1) − y_i^T1||² + θ2·||Convnet(x_i^T2) − y_i^T2||² ]
where L_pri denotes the loss function of the deep convolutional neural network Convnet(·); x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space with D(·) after the Fourier transform F_u(·) and then zero-filling; y_i^Tj is the i-th real fully sampled Tj-contrast MRI image; F_u(·) is the Fourier transform function that converts the MRI image into k-space; D(·) is the random down-sampling function that randomly down-samples the k-space of the complete MRI image at the ratio M; N is the total number of samples; and θ1 and θ2 are the weight coefficients of the loss function L_pri.
9. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 8, characterized in that after the similarity constraint is applied between the extracted structural features of the preliminary fully sampled T1-contrast MRI image and those of the preliminary fully sampled T2-contrast MRI image, the structural consistency loss is expressed as
L_cnt = (1/N)·Σ_{i=1..N} ||C_iTm − C_iTm+1||²
where L_cnt denotes the structural consistency loss, C_iTm ∈ R^N and C_iTm+1 ∈ R^N denote the structural features of the T1-contrast and T2-contrast MRI images respectively, and N is the total number of samples.
10. The deep-learning-based multi-contrast MRI image reconstruction method according to claim 9, characterized in that the deep convolutional neural network Convnet(·), the encoder, the generator and the discriminator constitute a generative adversarial network whose loss function comprises the self-encoding loss L_ate, the discriminator loss L_dis and the structural consistency loss L_cnt;
the self-encoding loss L_ate is expressed as
L_ate = (1/N)·Σ_{i=1..N} ||G(E(Convnet(x_i^Tj))) − Convnet(x_i^Tj)||²
where x_i^Tj denotes the undersampled MRI image simulating a real acquisition, obtained by down-sampling the i-th real fully sampled MRI image in k-space and zero-filling, and N is the total number of samples;
the discriminator loss LdisThe expression of (a) is:
Figure FDA0002866648580000036
wherein Dis () represents a discriminator;
Figure FDA0002866648580000037
for the ith true full sample TjContrast MRI images;
total loss function L of the reconstructed modeltotalThe expression of (a) is:
Ltotal=ρ1·Lpri+ρ2·Late+ρ3·Ldis+ρ4·Lcnt
where ρ 1, ρ 2, ρ 3, ρ 4 each represent a weight coefficient of the loss function.
CN202011589581.9A 2020-12-28 2020-12-28 Multi-contrast MRI image reconstruction method based on deep learning Active CN112700508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011589581.9A CN112700508B (en) 2020-12-28 2020-12-28 Multi-contrast MRI image reconstruction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011589581.9A CN112700508B (en) 2020-12-28 2020-12-28 Multi-contrast MRI image reconstruction method based on deep learning

Publications (2)

Publication Number Publication Date
CN112700508A true CN112700508A (en) 2021-04-23
CN112700508B CN112700508B (en) 2022-04-19

Family

ID=75511768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011589581.9A Active CN112700508B (en) 2020-12-28 2020-12-28 Multi-contrast MRI image reconstruction method based on deep learning

Country Status (1)

Country Link
CN (1) CN112700508B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130279781A1 (en) * 2012-04-19 2013-10-24 The Ohio State University Method for estimating a grappa reconstruction kernel
WO2014033207A1 (en) * 2012-08-29 2014-03-06 Koninklijke Philips N.V. Iterative sense denoising with feedback
CN106372654A (en) * 2016-08-29 2017-02-01 滕忠照 Method for assessing cerebral infarction risk caused by head and neck atherosclerosis plaques
CN107576925A (en) * 2017-08-07 2018-01-12 上海东软医疗科技有限公司 The more contrast image rebuilding methods of magnetic resonance and device
CN108305221A (en) * 2018-01-03 2018-07-20 上海东软医疗科技有限公司 A kind of magnetic resonance parallel imaging method and device
US20190355125A1 (en) * 2018-05-21 2019-11-21 Shanghai United Imaging Healthcare Co., Ltd. System and method for multi-contrast magnetic resonance imaging
US10527699B1 (en) * 2018-08-01 2020-01-07 The Board Of Trustees Of The Leland Stanford Junior University Unsupervised deep learning for multi-channel MRI model estimation
WO2020219915A1 (en) * 2019-04-24 2020-10-29 University Of Virginia Patent Foundation Denoising magnetic resonance images using unsupervised deep convolutional neural networks
CN111436936A (en) * 2020-04-29 2020-07-24 浙江大学 CT image reconstruction method based on MRI

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Duan J et al., "VS-Net: Variable splitting network for accelerated parallel MRI reconstruction", International Conference on Medical Image Computing and Computer-Assisted Intervention.
H. Jeelani et al., "Image quality affects deep learning reconstruction of MRI", 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018).
I. Chatnuntawech et al., "Fast reconstruction for accelerated multi-slice multi-contrast MRI", 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI).
Kopanoglu E et al., "Simultaneous use of individual and joint regularization terms in compressive sensing: Joint reconstruction of multi-channel multi-contrast MRI acquisitions", NMR in Biomedicine.
Yu Luo et al., "A cosparse analysis model with combined redundant systems for MRI reconstruction", Medical Physics.
Zhang et al., "Multi-channel generative adversarial network for parallel magnetic resonance image reconstruction in k-space", International Conference on Medical Image Computing and Computer-Assisted Intervention.
刘红 (Liu Hong), "Research on image deconvolution methods based on multi-channel constraints" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology.
崔雪红 等 (Cui Xuehong et al.), "Tire defect image classification based on multi-path convolutional neural networks" (in Chinese), Computer Engineering and Design.
袁敏 (Yuan Min), "Research on highly undersampled magnetic resonance image reconstruction based on multi-scale geometric analysis and dictionary learning" (in Chinese), China Doctoral Dissertations Full-text Database, Information Science and Technology.
郑虹 等 (Zheng Hong et al.), "Multi-contrast MRI super-resolution reconstruction based on local statistical properties" (in Chinese), Proceedings of the 18th National Annual Conference on Magnetic Resonance Spectroscopy.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971667A (en) * 2021-11-02 2022-01-25 上海可明科技有限公司 Training and optimizing method for target detection model of surgical instrument in storage environment
CN113971667B (en) * 2021-11-02 2022-06-21 上海可明科技有限公司 Training and optimizing method for target detection model of surgical instrument in storage environment
CN114114116A (en) * 2022-01-27 2022-03-01 南昌大学 Magnetic resonance imaging generation method, system, storage medium and computer equipment
CN114114116B (en) * 2022-01-27 2022-08-23 南昌大学 Magnetic resonance imaging generation method, system, storage medium and computer equipment

Also Published As

Publication number Publication date
CN112700508B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN106780372B (en) A kind of weight nuclear norm magnetic resonance imaging method for reconstructing sparse based on Generalized Tree
Wang et al. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data
CN113077527B (en) Rapid magnetic resonance image reconstruction method based on undersampling
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN106056647B (en) A kind of magnetic resonance fast imaging method based on the sparse double-deck iterative learning of convolution
CN109674471A (en) A kind of electrical impedance imaging method and system based on generation confrontation network
CN109410289A (en) A kind of high lack sampling hyperpolarized gas lung MRI method for reconstructing of deep learning
CN112700508B (en) Multi-contrast MRI image reconstruction method based on deep learning
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN104013403A (en) Three-dimensional heart magnetic resonance imaging method based on tensor composition sparse bound
CN109118428B (en) Image super-resolution reconstruction method based on feature enhancement
CN110942496B (en) Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN115578427A (en) Unsupervised single-mode medical image registration method based on deep learning
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
CN106981046B (en) Single image super resolution ratio reconstruction method based on multi-gradient constrained regression
CN113870327B (en) Medical image registration method based on prediction multi-level deformation field
CN117097876B (en) Event camera image reconstruction method based on neural network
CN116993926A (en) Single-view human body three-dimensional reconstruction method
CN114693823B (en) Magnetic resonance image reconstruction method based on space-frequency double-domain parallel reconstruction
CN114882992B (en) Multi-site functional magnetic resonance imaging heterogeneity removing method for predicting diseases
CN112669400B (en) Dynamic MR reconstruction method based on deep learning prediction and residual error framework
CN116797541A (en) Transformer-based lung CT image super-resolution reconstruction method
CN113192151B (en) MRI image reconstruction method based on structural similarity
WO2022193378A1 (en) Image reconstruction model generation method and apparatus, image reconstruction method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant