CN111436936A - CT image reconstruction method based on MRI - Google Patents


Info

Publication number
CN111436936A
CN111436936A (application CN202010355883.3A)
Authority
CN
China
Prior art keywords
space data
mri
image
undersampled
offline
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010355883.3A
Other languages
Chinese (zh)
Other versions
CN111436936B (en)
Inventor
张鞠成
孙云
饶先成
孙建忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110770801.6A, published as CN113470139A
Priority to CN202010355883.3A, granted as CN111436936B
Publication of CN111436936A
Application granted
Publication of CN111436936B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5247 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses a CT image reconstruction method based on MRI, which comprises the following steps: 1) reconstructing MRI using a deep learning network: training the deep learning network, acquiring undersampled k-space data of an object to be detected, and inputting the undersampled k-space data into the trained deep learning network to obtain an on-line MRI of the object to be detected; 2) reconstructing CT images from the MRI using a bidirectional generative adversarial network. The invention has the following advantages: (1) fast MRI imaging speed; (2) a wide application range, usable for lung imaging and for imaging of other parts of the human body; (3) CT images are obtained by reconstruction from MRI, avoiding the ionizing radiation of a CT examination; (4) the reconstructed CT images can also be used for radiotherapy planning and PET attenuation correction.

Description

CT image reconstruction method based on MRI
Technical Field
The invention relates to the technical field of medical image processing, in particular to a CT image reconstruction method based on MRI.
Background
Novel coronavirus pneumonia (COVID-19) is highly infectious and has a high fatality rate; early detection, early diagnosis, early treatment and early isolation are the most effective means of current prevention, control and treatment. Compared with the various limitations of nucleic acid testing, CT (computed tomography) examination is timely, accurate and rapid, has a high positive rate, and the extent of the lung lesions correlates closely with clinical symptoms, so CT examination has become a main reference basis for the early screening and diagnosis of patients with novel coronavirus pneumonia. According to the Diagnosis and Treatment Protocol for Novel Coronavirus Pneumonia (Trial Sixth Edition), novel coronavirus pneumonia presents in the early stage as multiple small patchy shadows and interstitial changes, most obvious in the outer lung zones. The disease then develops into multiple ground-glass opacities and infiltrates in both lungs; in severe cases pulmonary consolidation may occur, while pleural effusion is rare. From the initial CT evaluation on admission, through follow-up of lesion development, until cure and discharge, a few patients undergo CT examination 2 times and many undergo it 3-4 times. Children and pregnant women are not suited to CT examination because of the ionizing radiation. Magnetic Resonance Imaging (MRI) has the advantages of high soft-tissue contrast, no ionizing radiation, high resolution and tomography in any orientation, and is an important technology in modern medical imaging. MRI usually serves as an important supplement to chest radiography and CT, helping to identify lesions inside and outside the chest wall, inside and outside the mediastinum, and above and below the diaphragm, and to determine the origin of the lesions.
For imaging examination of novel coronavirus pneumonia, however, MRI is inferior to CT in imaging speed and in its depiction of the fine structure of the lung.
Disclosure of Invention
In view of the above, the present invention provides an MRI-based CT image reconstruction method with a fast imaging speed and good depiction of the lung microstructure, to solve the above-mentioned problems of slow MRI imaging and poor depiction of the lung microstructure.
The technical scheme of the invention is to provide a CT image reconstruction method based on MRI, which comprises the following steps:
1) reconstructing an MRI using a deep learning network, comprising the steps of:
acquiring fully sampled offline k-space data of a sample object, wherein full sampling means that the k-space data acquisition satisfies the Nyquist sampling theorem, so that an image of the sample object can be restored from the fully sampled k-space data, and offline k-space data means k-space data acquired from a magnetic resonance device;
performing inverse Fourier transform on the fully sampled offline k-space data to obtain fully sampled offline multi-contrast MRI; the multi-contrast MRI refers to scanning with a plurality of imaging sequences to obtain different contrasts, such as T1W, T2W, and the like;
undersampling the fully sampled offline k-space data in k-space to obtain undersampled offline k-space data, wherein undersampled means that the k-space data acquisition does not meet the Nyquist sampling theorem and aliasing artifacts are generated when the undersampled k-space data are directly used for image reconstruction;
training a deep learning network according to the undersampled offline k-space data and the fully sampled offline multi-contrast MRI;
acquiring undersampled k-space data of an object to be detected;
inputting the undersampled k-space data of the object to be detected into a trained deep learning network to obtain an on-line MRI of the object to be detected;
2) reconstructing a CT image from the on-line MRI using a bidirectional generative adversarial network; the bidirectional generative adversarial network consists of two generators and two discriminators: the first generator G_A maps from an on-line MRI to a CT image, and the second generator G_B maps from a CT image to an on-line MRI; the discriminators comprise a CT discriminator D_CT, which distinguishes CT images generated by the first generator G_A from real CT images, and an MRI discriminator D_MRI, which distinguishes MRI generated by the second generator G_B from real MRI; reconstructing a CT image from MRI using the bidirectional generative adversarial network comprises the following steps:
respectively acquiring unlabeled, unpaired MRI and CT images;
the real MRI I_MRI is converted by generator G_A into a generated CT image G_A(I_MRI);
the generated CT image G_A(I_MRI) is then converted by generator G_B into a reconstructed MRI G_B(G_A(I_MRI));
the real CT image I_CT is converted by generator G_B into a generated MRI G_B(I_CT);
the generated MRI is then converted by generator G_A into a reconstructed CT image G_A(G_B(I_CT));
the generator network formed by the first generator G_A and the second generator G_B and the discriminator network formed by the CT discriminator and the MRI discriminator oppose each other and continuously adjust their parameters; after optimization, the discriminator network can no longer judge whether the output of the generator network is real, while the reconstruction losses ||G_B(G_A(I_MRI)) - I_MRI|| and ||G_A(G_B(I_CT)) - I_CT|| are minimized.
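The reconstruction (cycle-consistency) losses above can be sketched numerically; the toy generators below are hypothetical stand-ins for the deep networks G_A and G_B, chosen to be exact inverses so the cycle loss vanishes:

```python
import numpy as np

# Hypothetical stand-ins for the generators; the real G_A (MRI -> CT) and
# G_B (CT -> MRI) are deep networks trained adversarially.
def G_A(mri):
    return 2.0 * mri + 1.0

def G_B(ct):
    return (ct - 1.0) / 2.0   # exact inverse of G_A, for illustration

def reconstruction_loss(I_MRI, I_CT):
    """||G_B(G_A(I_MRI)) - I_MRI|| + ||G_A(G_B(I_CT)) - I_CT|| (mean L1 here)."""
    return (np.abs(G_B(G_A(I_MRI)) - I_MRI).mean()
            + np.abs(G_A(G_B(I_CT)) - I_CT).mean())

I_MRI = np.random.rand(8, 8)
I_CT = np.random.rand(8, 8)
loss = reconstruction_loss(I_MRI, I_CT)   # near zero: the toy cycle is exact
```

In training, this loss is minimized jointly with the adversarial objectives of the two discriminators.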
Compared with the prior art, the method has the following advantages: (1) fast imaging speed; (2) a wide application range, usable for lung imaging and for imaging of other parts of the human body; (3) CT images are obtained by reconstruction from MRI, avoiding the ionizing radiation of a CT examination; (4) the reconstructed CT images can also be used for radiotherapy planning and PET (positron emission tomography) attenuation correction.
As an improvement, in step 2) the bidirectional generative adversarial network is a Wasserstein bidirectional generative adversarial network, in which the Wasserstein distance replaces the Jensen-Shannon divergence, and the loss function is: λ1||G_B(G_A(I_MRI)) - I_MRI|| + λ2||G_A(G_B(I_CT)) - I_CT|| - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)), where λ1 and λ2 are regularization parameters that may be selected empirically.
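A minimal numerical sketch of this generator-side Wasserstein loss, with placeholder generators and critics (the λ values below are arbitrary, since the patent leaves them to empirical choice):

```python
import numpy as np

def wgan_cycle_loss(I_MRI, I_CT, G_A, G_B, D_MRI, D_CT, lam1=10.0, lam2=10.0):
    """Generator-side loss of the Wasserstein variant; lam1/lam2 are the
    regularization weights, chosen arbitrarily here."""
    cyc_mri = np.abs(G_B(G_A(I_MRI)) - I_MRI).mean()   # ||G_B(G_A(I_MRI)) - I_MRI||
    cyc_ct = np.abs(G_A(G_B(I_CT)) - I_CT).mean()      # ||G_A(G_B(I_CT)) - I_CT||
    # Critic scores enter with a minus sign: the generators try to raise them
    return lam1 * cyc_mri + lam2 * cyc_ct - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI))

# Toy check: identity generators and zero-valued critics give zero loss
I = np.random.rand(4, 4)
J = np.random.rand(4, 4)
loss = wgan_cycle_loss(I, J, lambda x: x, lambda x: x,
                       lambda x: 0.0, lambda x: 0.0)
```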
As an improvement, in step 2) a perceptual loss is added to the loss function, using a pre-trained VGG16 network as the feature extractor; the loss function is given in the original as an image (not reproduced here), with λ1, λ2, λ3 and λ4 as regularization parameters. The VGG16 network is a classical deep learning model for image classification tasks. VGG is a convolutional neural network model proposed by Simonyan and Zisserman in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition"; its name derives from the abbreviation of the Visual Geometry Group of the University of Oxford, where the authors worked. The model entered the 2014 ImageNet image classification and localization challenge and achieved excellent performance: second place in the classification task and first place in the localization task.
As an improvement, the reconstructing MRI by using the deep learning network in step 1) includes:
acquiring fully sampled off-line k-space data y0 of a sample object;
performing inverse Fourier transform on the fully sampled off-line k-space data y0 to obtain a fully sampled off-line multi-contrast magnetic resonance image x0;
undersampling the fully sampled off-line k-space data y0 in k-space to obtain undersampled off-line k-space data y1;
high-pass filtering the undersampled off-line k-space data y1 to obtain y1*h;
training a deep learning network from the high-pass filtered undersampled off-line k-space data y1*h and the fully sampled off-line multi-contrast magnetic resonance image x0;
acquiring undersampled k-space data y2 of an object to be detected;
high-pass filtering the undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering on this k-space data to obtain reconstructed k-space data y2';
performing inverse Fourier transform on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
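The high-pass filtering and its inversion in the steps above can be sketched as element-wise weighting of k-space by a strictly positive mask h; the particular radial form of h below is an assumption, since the patent does not specify the filter:

```python
import numpy as np

def highpass_mask(shape, cutoff=0.1, floor=0.1):
    """Illustrative, strictly positive radial high-pass weight for (shifted)
    k-space; the patent does not specify h, so this form is an assumption."""
    ky, kx = np.meshgrid(np.linspace(-0.5, 0.5, shape[0]),
                         np.linspace(-0.5, 0.5, shape[1]), indexing="ij")
    r = np.sqrt(kx ** 2 + ky ** 2)
    return floor + (1.0 - floor) * (r > cutoff)   # nonzero everywhere, hence invertible

shape = (64, 64)
h = highpass_mask(shape)
y2 = np.random.rand(*shape) + 1j * np.random.rand(*shape)  # stand-in k-space
y2_h = y2 * h          # high-pass filtering to obtain y2*h
# ... the trained network would fill the missing k-space here (omitted) ...
y2_rec = y2_h / h      # inverse high-pass filtering recovers the k-space data
```

Because h is nonzero everywhere, the inverse filtering step is well defined.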
As an improvement, the reconstructing MRI by using the deep learning network in step 1) includes:
acquiring fully sampled multi-channel off-line k-space data y0 of a sample object;
performing inverse Fourier transform on the fully sampled multi-channel off-line k-space data y0 to obtain fully sampled multi-channel off-line multi-contrast magnetic resonance images x0;
undersampling the fully sampled multi-channel off-line k-space data y0 in k-space to obtain undersampled multi-channel off-line k-space data y1;
high-pass filtering the undersampled multi-channel off-line k-space data y1 to obtain y1*h;
training deep learning networks from the high-pass filtered undersampled multi-channel off-line k-space data y1*h and the fully sampled multi-channel off-line multi-contrast magnetic resonance images x0, with one deep learning network placed before and one after the parallel imaging step, as shown in FIG. 3;
the k-space data after the two deep learning networks and the parallel imaging processing is y1'*h; performing inverse high-pass filtering and data consistency correction on y1'*h, replacing the k-space data at the sampled positions with the acquired k-space data, which ensures that the deep learning networks fill in only the non-sampled k-space data;
performing inverse Fourier transform and root mean square operation on the data-consistency-corrected k-space data to obtain the final off-line reconstructed magnetic resonance image;
acquiring multi-channel undersampled k-space data y2 of the object to be detected;
high-pass filtering the multi-channel undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered multi-channel undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering to obtain reconstructed k-space data y2';
performing inverse Fourier transform and root mean square operation on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
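The final "root mean square operation" over channels is commonly read as a root-sum-of-squares coil combination; a short sketch under that assumption:

```python
import numpy as np

def rss_combine(channel_images):
    """Root-sum-of-squares combination over the channel axis, a common
    reading of the 'root mean square operation' in the text (assumption)."""
    return np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))

# Two toy coil images with magnitudes 3 and 4 combine to 5 everywhere
coils = np.stack([np.full((4, 4), 3.0), np.full((4, 4), 4.0)])
combined = rss_combine(coils)
```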
As an improvement, the parallel imaging method is one of GRAPPA or SPIRiT.
As an improvement, the deep learning network in step 1) is composed of a k-space-domain U-Net and an image-domain U-Net. The undersampled off-line k-space data is first input into the k-space-domain U-Net and data consistency correction is performed; a magnetic resonance image is then obtained by inverse Fourier transform and input into the image-domain U-Net; a Fourier transform then yields k-space data, on which data consistency correction is performed again. The data consistency correction replaces the k-space data at the sampled positions with the acquired k-space data, ensuring that the deep learning network fills in only the non-sampled k-space data.
Drawings
FIG. 1 is a flow chart of the present invention for reconstructing MRI using a deep learning network;
FIG. 2 is a schematic diagram of the present invention for reconstructing CT images from MRI using a generative adversarial network;
FIG. 3 is a flowchart of reconstructing an MRI using a deep learning network according to an embodiment of the present invention;
fig. 4 is a configuration diagram of a deep learning network according to an embodiment of the present invention.
1 - MRI space; 2 - CT space; 31 - generator G_A for generating CT from MRI; 32 - generator G_B for generating MRI from CT; 41 - MRI discriminator; 42 - CT discriminator; 11 - real MRI image; 12 - generated CT image; 13 - reconstructed MRI image; 21 - real CT image; 22 - generated MRI image; 23 - reconstructed CT image.
Detailed Description
The invention will be further described with reference to the following drawings and specific examples, but the invention is not limited to these examples. The invention is intended to cover alternatives, modifications and equivalents that may be included within its spirit and scope. In the following description of the preferred embodiments of the present invention, specific details are set forth in order to provide a thorough understanding of the invention, but it will be apparent to those skilled in the art that the invention may be practiced without these details.
Compared with traditional MRI image reconstruction techniques, image reconstruction based on deep learning has great potential for shortening magnetic resonance scan time, accelerating imaging and improving image quality. FIG. 1 is a flow chart of the present invention for reconstructing MRI using a deep learning network. In the off-line training process, the undersampled off-line k-space data and the fully sampled off-line multi-contrast magnetic resonance images are used to train the deep learning network, and the reconstructed k-space data produced by the network is inverse-Fourier-transformed to obtain a reconstructed magnetic resonance image. In the on-line testing process, undersampled k-space data of the object to be detected is input into the deep learning network, reconstructed k-space data is output, and a reconstructed magnetic resonance image is obtained by inverse Fourier transform. k-space is the Fourier dual of the rectangular-coordinate image space, i.e. the spatial-frequency space of the Fourier transform, and is mainly used in the field of magnetic resonance imaging.
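The image/k-space duality described here is simply the 2-D Fourier transform pair used throughout the method, e.g.:

```python
import numpy as np

# k-space is the 2-D Fourier transform of the image; the inverse transform
# recovers the image, which is the round trip used throughout the method.
image = np.random.rand(32, 32)
kspace = np.fft.fft2(image)
recovered = np.fft.ifft2(kspace).real
```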
Optionally, the multi-contrast images include a T1 weighted image, a T2 weighted image, and a proton density image, and the fields of view and matrix sizes of the multi-contrast images are the same. Wherein the T1 weighted image mainly highlights longitudinal relaxation differences of tissues in the sample object, minimizing the influence of other properties of tissues such as transverse relaxation on the image. The T2 weighted image highlights mainly the difference in transverse relaxation of the tissue in the sample object. The proton density image reflects mainly differences in the proton content of the tissue in the sample object.
FIG. 2 is a schematic diagram of the present invention for reconstructing a CT image from MRI using a generative adversarial network. 1 - MRI space; 2 - CT space; 31 - generator G_A for generating CT from MRI; 32 - generator G_B for generating MRI from CT; 41 - MRI discriminator; 42 - CT discriminator; 11 - real MRI image; 12 - generated CT image; 13 - reconstructed MRI image; 21 - real CT image; 22 - generated MRI image; 23 - reconstructed CT image.
According to one embodiment, unlabeled, unpaired MRI and CT images are acquired separately;
the real MRI image I_MRI is converted by generator G_A into a generated CT image G_A(I_MRI);
the generated CT image G_A(I_MRI) is then converted by generator G_B into a reconstructed MRI image G_B(G_A(I_MRI));
similarly, the real CT image I_CT is converted by generator G_B into a generated MRI image G_B(I_CT);
the generated MRI image G_B(I_CT) is then converted by generator G_A into a reconstructed CT image G_A(G_B(I_CT));
the generator network and the discriminator network oppose each other and continuously adjust their parameters; after optimization, the discriminator network can no longer judge whether the output of the generator network is real.
During on-line testing, an MRI image is input and passed through generator G_A to obtain the corresponding CT image.
According to one embodiment, during on-line testing, the undersampled k-space data of the object to be detected is input, a reconstructed magnetic resonance image is obtained through the magnetic resonance image reconstruction deep learning network, and the reconstructed magnetic resonance image is then input into the generative adversarial network to obtain the final CT image.
According to one embodiment, a CT image reconstruction method includes the steps of:
acquiring fully sampled off-line k-space data y0 of a sample object;
performing inverse Fourier transform on the fully sampled off-line k-space data y0 to obtain a fully sampled off-line multi-contrast magnetic resonance image x0;
undersampling the fully sampled off-line k-space data y0 in k-space to obtain undersampled off-line k-space data y1;
high-pass filtering the undersampled off-line k-space data y1 to obtain y1*h;
training a deep learning network from the high-pass filtered undersampled off-line k-space data y1*h and the fully sampled off-line multi-contrast magnetic resonance image x0;
acquiring undersampled k-space data y2 of an object to be detected;
high-pass filtering the undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering on this k-space data to obtain reconstructed k-space data y2';
performing inverse Fourier transform on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
The on-line magnetic resonance image is then input into the generative adversarial network to obtain the final CT image.
According to one embodiment, for multi-channel magnetic resonance imaging, the MRI imaging step combining parallel imaging and deep learning comprises:
acquiring fully sampled multi-channel off-line k-space data y0 of a sample object;
performing inverse Fourier transform on the fully sampled multi-channel off-line k-space data y0 to obtain fully sampled multi-channel off-line multi-contrast magnetic resonance images x0;
undersampling the fully sampled multi-channel off-line k-space data y0 in k-space to obtain undersampled multi-channel off-line k-space data y1;
high-pass filtering the undersampled multi-channel off-line k-space data y1 to obtain y1*h;
training deep learning networks from the high-pass filtered undersampled multi-channel off-line k-space data y1*h and the fully sampled multi-channel off-line multi-contrast magnetic resonance images x0, with one deep learning network placed before and one after the parallel imaging step, as shown in FIG. 3;
the k-space data after the two deep learning networks and the parallel imaging processing is y1'*h; performing inverse high-pass filtering and data consistency correction on y1'*h, replacing the k-space data at the sampled positions with the acquired k-space data, which ensures that the deep learning networks fill in only the non-sampled k-space data;
performing inverse Fourier transform and root mean square operation on the data-consistency-corrected k-space data to obtain the final off-line reconstructed magnetic resonance image;
acquiring multi-channel undersampled k-space data y2 of the object to be detected;
high-pass filtering the multi-channel undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered multi-channel undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering to obtain reconstructed k-space data y2';
performing inverse Fourier transform and root mean square operation on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
The on-line magnetic resonance image is then input into the generative adversarial network to obtain the final CT image.
According to one embodiment, as shown in fig. 4, the deep learning network includes a k-space-domain U-Net and an image-domain U-Net. The undersampled off-line k-space data is first input into the k-space-domain U-Net and data consistency correction is performed; a magnetic resonance image is then obtained by inverse Fourier transform and input into the image-domain U-Net; a Fourier transform then yields k-space data, on which data consistency correction is performed again. The data consistency correction replaces the k-space data at the sampled positions with the acquired k-space data, ensuring that the deep learning network fills in only the non-sampled k-space data.
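The data consistency correction described above reduces to keeping the measured k-space values at sampled locations and the network's predictions elsewhere; a minimal sketch:

```python
import numpy as np

def data_consistency(k_pred, k_sampled, mask):
    """Keep the measured k-space samples where mask is True, so the network
    only fills in the non-sampled locations (as described in the text)."""
    return np.where(mask, k_sampled, k_pred)

mask = np.zeros((8, 8), dtype=bool)
mask[::2, :] = True                              # sampled phase-encode lines
k_sampled = np.ones((8, 8), dtype=complex)       # toy measured data
k_pred = np.full((8, 8), 5.0 + 0j)               # toy network prediction
k_dc = data_consistency(k_pred, k_sampled, mask)
```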
An MRI-based CT image reconstruction system can be formed based on the reconstruction method.
The foregoing is illustrative of the preferred embodiments of the present invention only and is not to be construed as limiting the claims. The present invention is not limited to the above embodiments, and the specific structure thereof is allowed to vary. In general, all changes which come within the scope of the invention as defined by the independent claims are intended to be embraced therein.

Claims (7)

1. An MRI-based CT image reconstruction method, comprising the steps of:
1) reconstructing an MRI using a deep learning network, comprising the steps of:
acquiring fully sampled offline k-space data of a sample object, wherein full sampling means that the k-space data acquisition satisfies the Nyquist sampling theorem, so that an image of the sample object can be restored from the fully sampled k-space data, and offline k-space data means k-space data acquired from a magnetic resonance device;
performing inverse Fourier transform on the fully sampled offline k-space data to obtain fully sampled offline multi-contrast MRI, wherein the multi-contrast MRI refers to scanning by using multiple imaging sequences to obtain different contrasts;
undersampling the fully sampled offline k-space data in k-space to obtain undersampled offline k-space data, wherein undersampled means that the k-space data acquisition does not meet the Nyquist sampling theorem and aliasing artifacts are generated when the undersampled k-space data are directly used for image reconstruction;
training a deep learning network according to the undersampled offline k-space data and the fully sampled offline multi-contrast MRI;
acquiring undersampled k-space data of an object to be detected;
inputting the undersampled k-space data of the object to be detected into a trained deep learning network to obtain an on-line MRI of the object to be detected;
2) reconstructing a CT image from the on-line MRI using a bidirectional generative adversarial network; the bidirectional generative adversarial network consists of two generators and two discriminators: the first generator G_A maps from an on-line MRI to a CT image, and the second generator G_B maps from a CT image to an on-line MRI; the discriminators comprise a CT discriminator D_CT, which distinguishes CT images generated by the first generator G_A from real CT images, and an MRI discriminator D_MRI, which distinguishes MRI generated by the second generator G_B from real MRI; reconstructing a CT image from MRI using the bidirectional generative adversarial network comprises the following steps:
respectively acquiring unmarked and unpaired MRI and CT images;
the real MRI I_MRI is converted by generator G_A into a generated CT image G_A(I_MRI);
the generated CT image G_A(I_MRI) is then converted by generator G_B into a reconstructed MRI G_B(G_A(I_MRI));
the real CT image I_CT is converted by generator G_B into a generated MRI G_B(I_CT);
the generated MRI is then converted by generator G_A into a reconstructed CT image G_A(G_B(I_CT));
the generator network formed by the first generator G_A and the second generator G_B and the discriminator network formed by the CT discriminator and the MRI discriminator oppose each other and continuously adjust their parameters; after optimization, the discriminator network can no longer judge whether the output of the generator network is real, while the reconstruction losses ||G_B(G_A(I_MRI)) - I_MRI|| and ||G_A(G_B(I_CT)) - I_CT|| are minimized.
2. The MRI-based CT image reconstruction method of claim 1, wherein: in step 2), the bidirectional generative adversarial network is a Wasserstein bidirectional generative adversarial network, in which the Wasserstein distance replaces the Jensen-Shannon divergence, and the loss function is:
λ_1||G_B(G_A(I_MRI)) - I_MRI|| + λ_2||G_A(G_B(I_CT)) - I_CT|| - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)),
where λ_1 and λ_2 are regularization parameters.
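A sketch of how this objective assembles cycle and critic terms (NumPy; generators and critics are plain callables, and the norm and λ values are illustrative assumptions, not fixed by the claim):

```python
import numpy as np

def wgan_cycle_objective(G_A, G_B, D_MRI, D_CT, I_MRI, I_CT,
                         lam1=10.0, lam2=10.0):
    """Claim-2 generator objective: weighted cycle terms minus the
    Wasserstein critic scores. The critics output unbounded real scores
    rather than probabilities, which is why no logarithm appears."""
    cyc = (lam1 * np.linalg.norm(G_B(G_A(I_MRI)) - I_MRI)
           + lam2 * np.linalg.norm(G_A(G_B(I_CT)) - I_CT))
    adv = -D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI))
    return cyc + adv

# Toy check: identity generators give zero cycle loss, and zero-valued
# critics give a zero adversarial term.
ident = lambda x: x
zero_critic = lambda x: 0.0
val = wgan_cycle_objective(ident, ident, zero_critic, zero_critic,
                           np.ones((2, 2)), np.ones((2, 2)))
```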
3. The MRI-based CT image reconstruction method according to claim 2, wherein: in step 2), a perceptual loss is added to the loss function, a pre-trained VGG16 network is used as the feature extractor, and the loss function is:
λ_1||G_B(G_A(I_MRI)) - I_MRI|| + λ_2||G_A(G_B(I_CT)) - I_CT|| - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)) + λ_3||VGG(G_B(G_A(I_MRI))) - VGG(I_MRI)|| + λ_4||VGG(G_A(G_B(I_CT))) - VGG(I_CT)||,
where λ_1, λ_2, λ_3 and λ_4 are regularization parameters.
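The perceptual terms compare feature maps rather than pixels; a sketch in which `vgg` is any callable standing in for the pre-trained VGG16 feature extractor named in the claim (in practice, e.g., a truncated torchvision VGG16; the layer at which features are taken is not specified in the claim):

```python
import numpy as np

def perceptual_terms(vgg, G_A, G_B, I_MRI, I_CT, lam3=1.0, lam4=1.0):
    """Claim-3 perceptual additions: feature-space distances between each
    cycle-reconstructed image and its source, weighted by lam3/lam4.
    `vgg` stands in for the pre-trained VGG16 feature extractor."""
    p_mri = lam3 * np.linalg.norm(vgg(G_B(G_A(I_MRI))) - vgg(I_MRI))
    p_ct = lam4 * np.linalg.norm(vgg(G_A(G_B(I_CT))) - vgg(I_CT))
    return p_mri + p_ct

# Identity generators make both cycles exact, so the perceptual terms vanish
# regardless of the feature extractor used.
feat = lambda x: x.mean(axis=0)   # toy stand-in "feature extractor"
p = perceptual_terms(feat, lambda x: x, lambda x: x,
                     np.ones((3, 3)), np.ones((3, 3)))
```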
4. The MRI-based CT image reconstruction method of claim 1, wherein reconstructing the MRI using the deep learning network in step 1) comprises the following steps:
acquiring fully sampled offline k-space data y_0 of a sample object;
performing an inverse Fourier transform on the fully sampled offline k-space data y_0 to obtain a fully sampled offline multi-contrast magnetic resonance image x_0;
undersampling the fully sampled offline k-space data y_0 in k-space to obtain undersampled offline k-space data y_1;
high-pass filtering the undersampled offline k-space data y_1 to obtain y_1*h;
training a deep learning network from the high-pass-filtered undersampled offline k-space data y_1*h and the fully sampled offline multi-contrast magnetic resonance image x_0;
acquiring undersampled k-space data y_2 of an object to be measured;
high-pass filtering the undersampled k-space data y_2 to obtain y_2*h;
inputting the high-pass-filtered undersampled k-space data y_2*h of the object to be measured into the trained deep learning network to obtain k-space-filled data y_2'*h;
performing inverse high-pass filtering on the filled k-space data to obtain reconstructed k-space data y_2';
performing an inverse Fourier transform on the reconstructed k-space data y_2' to obtain an online magnetic resonance image.
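The filter-then-invert pipeline of claim 4 can be sketched end to end; a NumPy illustration under the assumption that the high-pass filtering y*h denotes pointwise weighting in k-space (the patent does not fix the form of h or of the operation), with the trained network replaced by an identity stand-in:

```python
import numpy as np

def highpass_weight(shape, cutoff=0.1):
    # Radial k-space weight emphasizing high spatial frequencies
    # (an illustrative choice; the patent does not specify the filter h).
    ky, kx = np.meshgrid(np.linspace(-0.5, 0.5, shape[0]),
                         np.linspace(-0.5, 0.5, shape[1]), indexing="ij")
    r = np.hypot(ky, kx)
    return np.clip(r / cutoff, 0.0, 1.0) + 1e-3  # offset keeps h invertible

def filter_roundtrip(y, h, network=lambda k: k):
    # y -> y*h -> trained network (identity stand-in) -> /h -> image.
    y_h = y * h                                   # high-pass filtering
    y_filled_h = network(y_h)                     # k-space filling step
    y_rec = y_filled_h / h                        # inverse high-pass filtering
    return np.fft.ifft2(np.fft.ifftshift(y_rec))  # inverse Fourier transform

# With the identity stand-in, the pipeline returns the original image.
x = np.random.default_rng(0).standard_normal((8, 8))
y = np.fft.fftshift(np.fft.fft2(x))  # centered k-space of x
img = filter_roundtrip(y, highpass_weight(y.shape))
```

The small additive offset in the weight makes the inverse filtering well defined at the k-space center, where a pure high-pass weight would be zero.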
5. The MRI-based CT image reconstruction method according to claim 1 or 4, wherein reconstructing the MRI using the deep learning network in step 1) comprises:
acquiring fully sampled multi-channel offline k-space data y_0 of a sample object;
performing an inverse Fourier transform on the fully sampled multi-channel offline k-space data y_0 to obtain a fully sampled multi-channel offline multi-contrast magnetic resonance image x_0;
undersampling the fully sampled multi-channel offline k-space data y_0 in k-space to obtain undersampled multi-channel offline k-space data y_1;
high-pass filtering the undersampled multi-channel offline k-space data y_1 to obtain y_1*h;
training a deep learning network from the high-pass-filtered undersampled multi-channel offline k-space data y_1*h and the fully sampled multi-channel offline multi-contrast magnetic resonance image x_0, with deep learning networks arranged before and after parallel imaging respectively, as shown in FIG. 3;
denoting the k-space data after processing by the two deep learning networks and parallel imaging as y_1'*h, performing inverse high-pass filtering and data consistency correction on y_1'*h, wherein the k-space data at the sampled positions are replaced by the acquired k-space data, ensuring that the deep learning network only fills in the unsampled k-space data;
performing an inverse Fourier transform and a root-mean-square operation on the consistency-corrected k-space data to obtain the final offline reconstructed magnetic resonance image;
acquiring multi-channel undersampled k-space data y_2 of the object to be measured;
high-pass filtering the multi-channel undersampled k-space data y_2 to obtain y_2*h;
inputting the high-pass-filtered multi-channel undersampled k-space data y_2*h of the object to be measured into the trained deep learning network to obtain k-space-filled data y_2'*h;
performing inverse high-pass filtering on the filled k-space data to obtain reconstructed k-space data y_2';
performing an inverse Fourier transform and a root-mean-square operation on the reconstructed k-space data y_2' to obtain an online magnetic resonance image.
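The closing steps of claim 5 (a per-channel inverse Fourier transform followed by the root-mean-square operation) correspond to the standard root-sum-of-squares coil combination; a NumPy sketch:

```python
import numpy as np

def rss_reconstruct(multi_channel_kspace):
    """Per-channel inverse Fourier transform followed by root-sum-of-squares
    combination across channels (the "root mean square operation" of claim 5).
    Input: complex array of shape (n_channels, ny, nx)."""
    imgs = np.fft.ifft2(multi_channel_kspace, axes=(-2, -1))
    return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))

# Two identical constant-image channels combine to sqrt(2) times the image.
x = np.ones((4, 4))
k = np.fft.fft2(x)
out = rss_reconstruct(np.stack([k, k]))
```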
6. The MRI-based CT image reconstruction method of claim 5, wherein: the parallel imaging method is one of GRAPPA or SPIRiT.
7. The MRI-based CT image reconstruction method according to any one of claims 1 to 3, wherein: the deep learning network in step 1) consists of a k-space-domain U-Net and an image-domain U-Net; the undersampled offline k-space data are first input into the k-space-domain U-Net, followed by data consistency correction; an inverse Fourier transform then yields a magnetic resonance image, which is input into the image-domain U-Net; a Fourier transform then returns the result to k-space, followed by another data consistency correction; the data consistency correction replaces the k-space data at the sampled positions with the acquired k-space data, ensuring that the deep learning network only fills in the unsampled k-space data.
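The data consistency correction described in claims 5 and 7 reduces to a masked replacement in k-space; a minimal NumPy sketch (array names are illustrative):

```python
import numpy as np

def data_consistency(y_network, y_acquired, sampling_mask):
    """Keep acquired k-space samples where the mask is set; keep the
    network's filled-in values elsewhere, so the network only supplies
    the unsampled k-space data (claim 7)."""
    return np.where(sampling_mask, y_acquired, y_network)

# Sampled positions come from the acquisition, the rest from the network.
mask = np.array([[1, 0], [0, 1]], dtype=bool)
acquired = np.array([[10.0, 0.0], [0.0, 40.0]])
network = np.array([[1.0, 2.0], [3.0, 4.0]])
y = data_consistency(network, acquired, mask)
```

Applied after each network stage, this guarantees that the final k-space agrees exactly with the measured samples.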
CN202010355883.3A 2020-04-29 2020-04-29 CT image reconstruction method based on MRI Active CN111436936B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110770801.6A CN113470139A (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI
CN202010355883.3A CN111436936B (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010355883.3A CN111436936B (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110770801.6A Division CN113470139A (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI

Publications (2)

Publication Number Publication Date
CN111436936A true CN111436936A (en) 2020-07-24
CN111436936B CN111436936B (en) 2021-07-27

Family

ID=71657717

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010355883.3A Active CN111436936B (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI
CN202110770801.6A Pending CN113470139A (en) 2020-04-29 2020-04-29 CT image reconstruction method based on MRI


Country Status (1)

Country Link
CN (2) CN111436936B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150568A * 2020-09-16 2020-12-29 Zhejiang University Magnetic resonance fingerprint imaging reconstruction method based on Transformer model
CN112700508A * 2020-12-28 2021-04-23 Guangdong University of Technology Multi-contrast MRI image reconstruction method based on deep learning
CN112862738A * 2021-04-09 2021-05-28 Fujian FTZ Xiamen Area Manteia Data Technology Co., Ltd. Multi-modal image synthesis method and device, storage medium and processor
CN113470139A * 2020-04-29 2021-10-01 Zhejiang University CT image reconstruction method based on MRI

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549681A * 2022-02-25 2022-05-27 Tsinghua University Image generation method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106970343A * 2017-04-11 2017-07-21 Shenzhen Institutes of Advanced Technology MR imaging method and device
US20190066281A1 * 2017-08-24 2019-02-28 Siemens Healthcare Gmbh Synthesizing and Segmenting Cross-Domain Medical Images
CN110084863A * 2019-04-25 2019-08-02 Sun Yat-sen University Multi-domain image conversion method and system based on generative adversarial networks
CN110503654A * 2019-08-01 2019-11-26 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Medical image segmentation method, system and electronic equipment based on generative adversarial networks
CN110689561A * 2019-09-18 2020-01-14 Sun Yat-sen University Conversion method, system and medium of multi-modal MRI and multi-modal CT based on modular GAN

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110074813B * 2019-04-26 2022-03-04 Shenzhen University Ultrasonic image reconstruction method and system
CN110270015B * 2019-05-08 2021-03-09 University of Science and Technology of China sCT generation method based on multi-sequence MRI
CN110827369B * 2019-10-31 2023-09-26 Shanghai United Imaging Intelligence Co., Ltd. Undersampling model generation method, image reconstruction method, apparatus and storage medium
CN111047660B * 2019-11-20 2022-01-28 Shenzhen Institutes of Advanced Technology Image reconstruction method, device, equipment and storage medium
CN110992440B * 2019-12-10 2023-04-21 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Weakly supervised magnetic resonance fast imaging method and device
CN111436936B * 2020-04-29 2021-07-27 Zhejiang University CT image reconstruction method based on MRI





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant