WO2019074938A1 - Contrast dose reduction for medical imaging using deep learning - Google Patents
- Publication number: WO2019074938A1 (application PCT/US2018/055034)
- Authority: WO (WIPO/PCT)
- Prior art keywords: contrast, images, image, dose, full
Classifications
- G06T7/0012—Biomedical image inspection
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/70—Denoising; Smoothing
- G06T7/50—Depth or shape recovery
- G06T3/60—Rotation of whole images or parts thereof
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G16H30/40—ICT specially adapted for the handling or processing of medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10116—X-ray image
- G06T2207/10121—Fluoroscopy
- G06T2207/10132—Ultrasound image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Definitions
- This invention relates generally to medical diagnostic imaging. More specifically, it relates to imaging techniques that use contrast agents.
- Many medical imaging modalities use contrast agents to enhance the visualization of normal and abnormal structures. Examples include conventional angiography, fluoroscopy, computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI). It is often desirable to reduce the contrast agent dose in order to increase the safety of these agents. However, a reduced dose also reduces the desired imaging enhancement, so dose reduction has not previously been practical.
- MRI is a powerful imaging technique providing unique information to distinguish different soft tissues and pathologies. Magnetic contrast agents with unique relaxation parameters are often administered to further boost the visibility of pathology and delineation of lesions.
- Gadolinium based contrast agents are widely used in MRI exams because of their paramagnetic properties, for applications such as angiography, neuroimaging and liver imaging.
- CE-MRI: contrast-enhanced MRI
- The present invention provides techniques to enhance the image quality of diagnostic imaging modalities using a lower dose of contrast than is currently possible. This enables new opportunities for improving the value of medical imaging.
- The techniques are generally applicable to a variety of diagnostic imaging techniques including angiography, fluoroscopy, computed tomography (CT), and ultrasound.
- The techniques of the present invention are able to predict a synthesized full-dose contrast agent image from a low-dose contrast agent image and a pre-dose image.
- The low dose may be any fraction of the full dose, but is preferably 1/10 or less of the full dose.
- Significantly, naively amplifying the contrast enhancement of a 1/10 low-dose CE-MRI by a factor of ten results in poor image quality, with widespread noise and ambiguous structures.
- The techniques of the present invention are remarkably able to recover the full contrast signal and generate predicted full-contrast images with high diagnostic quality.
- Embodiments of the invention use a deep learning network, such as a convolutional neural network for image-to-image regression, with a pre-contrast image and a low-contrast image as input, and with a predicted full-contrast image as output.
- A residual learning approach is preferably used in prediction.
- The method includes preprocessing to co-register and normalize the different images so they are directly comparable. This step is important because there are arbitrary acquisition and scaling factors for each scan. Preferably, the average signal is used for normalization.
- The preprocessed images are then used to train the deep learning network to predict the full-contrast image from the pre-contrast and low-contrast images.
- The trained network is then used to synthesize full-contrast images from clinical scans of pre-contrast and low-contrast images.
- The techniques are generally applicable to any diagnostic imaging modality that uses a contrast agent. For example, imaging applications include fluoroscopy, MRI, CT, and ultrasound.
- In one aspect, the invention provides a method for training a diagnostic imaging device to perform medical diagnostic imaging with reduced contrast agent dose.
- The method includes a) performing diagnostic imaging of a set of subjects to produce a set of images comprising, for each subject of the set of subjects, i) a full-contrast image acquired with a full contrast agent dose administered to the subject, ii) a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than the full contrast agent dose, and iii) a zero-contrast image acquired with no contrast agent dose administered to the subject; b) pre-processing the set of images to co-register and normalize the set of images to adjust for acquisition and scaling differences between different scans; and c) training a deep learning network (DLN) with the pre-processed set of images by applying zero-contrast images from the set of images and low-contrast images from the set of images as input to the DLN and using a cost function to compare the output of the DLN with the full-contrast images from the set of images as reference ground-truth images.
- In another aspect, the invention provides a method for medical diagnostic imaging with reduced contrast agent dose.
- The method includes a) performing diagnostic imaging of a subject to produce a low-contrast image acquired with a low contrast agent dose administered to the subject, where the low contrast agent dose is less than a full contrast agent dose, and a zero-contrast image acquired with no contrast agent dose administered to the subject; b) pre-processing the low-contrast image and zero-contrast image to co-register and normalize the images to adjust for acquisition and scaling differences; and c) applying the low-contrast image and the zero-contrast image as input to a deep learning network (DLN) to generate as output of the DLN a synthesized full-dose contrast agent image of the subject; where the DLN has been trained by applying zero-contrast images and low-contrast images as input and full-contrast images as reference ground-truth images.
- The low contrast agent dose is preferably less than 10% of the full contrast agent dose.
- The diagnostic imaging may be angiography, fluoroscopy, computed tomography (CT), ultrasound, or magnetic resonance imaging (MRI).
- Performing diagnostic imaging may include performing magnetic resonance imaging where the full contrast agent dose is at most 0.1 mmol/kg of gadolinium MRI contrast.
- In embodiments, the DLN is an encoder-decoder convolutional neural network (CNN) including bypass concatenate connections and residual connections.
- In another aspect, the invention provides a method for medical diagnostic imaging including a) performing diagnostic imaging of a subject to produce a first image acquired with a first image acquisition sequence and a second image acquired with a second image acquisition sequence distinct from the first image acquisition sequence, where zero contrast agent dose is administered during the diagnostic imaging; b) pre-processing the first image and the second image to co-register and normalize the images to adjust for acquisition and scaling differences; and c) applying the first image and the second image as input to a deep learning network (DLN) to generate as output of the DLN a synthesized full-dose contrast agent image of the subject; wherein the DLN has been trained by applying zero-contrast images with different imaging sequences as input and full-contrast images acquired with a full contrast agent dose as reference ground-truth images.
- FIG. 1 is a flow chart showing a processing pipeline according to an embodiment of the invention.
- FIG. 2 illustrates a workflow of a protocol and procedure for acquisition of images used for training according to an embodiment of the invention.
- FIG. 3 is a schematic overview of an image processing flow according to an embodiment of the invention.
- FIG. 4 is an illustration of a signal model to synthesize a full-dose contrast-enhanced MRI image according to an embodiment of the invention.
- FIG. 5 shows a detailed deep learning (DL) model architecture according to an embodiment of the invention.
- FIG. 6 is a set of images of a patient with intracranial metastatic disease, showing the predicted image synthesized from pre-contrast image and low-dose image, compared with a full-dose image according to an embodiment of the invention.
- FIG. 7 is a set of images of a patient with a brain neoplasm, showing a synthesized full-dose image predicted from a pre-contrast image and low-dose image, compared with a full-dose image according to an embodiment of the invention.
- FIG. 8 is a set of images of a patient with a programmable shunt and intracranial hypotension, showing noise suppression capability and consistent contrast enhancement capability according to an embodiment of the invention.
- Embodiments of the present invention provide a deep learning based diagnostic imaging technique to significantly reduce contrast agent dose levels while maintaining diagnostic quality for clinical images.
- Detailed illustrations of the protocol and procedure of an embodiment of the invention are shown in FIG. 1 and FIG. 2.
- While the embodiments described below focus on MRI for purposes of illustration, the principles and techniques of the invention described herein are not limited to MRI but are generally applicable to various imaging modalities that make use of contrast agents.
- FIG. 1 is a flow chart showing a processing pipeline for an embodiment of the invention.
- A deep learning network is trained using multi-contrast images 100, 102, 104 acquired from scans of a multitude of subjects with a wide range of clinical indications.
- The images are pre-processed with image co-registration 106 to produce co-registered multi-contrast images 108, and with data augmentation 110 to produce normalized multi-contrast image patches 112.
- Rigid or non-rigid co-registration may be used to adjust multiple image slices or volumes to match their pixels and voxels to each other. Since there can be arbitrary scaling differences between different volumes, normalization is used to match the intensity of each image/volume. Brain and anatomy masks are optionally used to extract the important regions of interest in each image/volume.
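As a toy illustration of rigid (translation-only) co-registration, an integer shift between two slices can be recovered by FFT cross-correlation. This sketch is not from the patent; real pipelines would use dedicated registration software, and `estimate_shift` is a hypothetical helper:

```python
import numpy as np

def estimate_shift(fixed, moving):
    """Estimate the integer circular translation t such that
    moving == np.roll(fixed, t), via FFT cross-correlation."""
    cross = np.fft.ifft2(np.conj(np.fft.fft2(fixed)) * np.fft.fft2(moving))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Map peaks in the upper half-range to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, cross.shape))

rng = np.random.default_rng(2)
fixed = rng.uniform(size=(32, 32))                    # reference slice
moving = np.roll(fixed, shift=(3, -5), axis=(0, 1))   # misaligned copy

dy, dx = estimate_shift(fixed, moving)
aligned = np.roll(moving, shift=(-dy, -dx), axis=(0, 1))  # undo the shift
```

Non-rigid registration additionally warps the moving volume with a spatially varying deformation, which is beyond this sketch.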
- Reference images 104 are also processed to perform co-registration and normalization 118. These pre-processed images are then used to train a deep learning network 114, which is preferably implemented using residual learning in a convolutional neural network.
- The input to the deep learning network is a zero-contrast-dose image 100 and a low-contrast-dose image 102, while the output of the network is a synthesized prediction of a full-contrast-dose image 116.
- A reference full-contrast image 104 is compared with the synthesized image 116 using a loss function to train the network via error backpropagation.
- FIG. 2 illustrates the workflow of the protocol and procedure for acquisition of images used for training.
- First, a pre-contrast (zero-dose) image 200 is acquired; then a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired.
- An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired.
- In one embodiment, images are acquired with 3T MRI scanners (GE Healthcare, Waukesha, WI, USA) using a standard clinical neuro protocol with high-resolution 3D T1-weighted inversion-recovery-prepped fast-spoiled-gradient-echo (IR-FSPGR) imaging.
- High-resolution T1-weighted IR-FSPGR pre-contrast images and post-contrast images with a 10% low dose and a 100% full dose of gadobenate dimeglumine (0.01 and 0.1 mmol/kg, respectively) are acquired.
- For other modalities, similar setups are used: at least one set of images without enhancement is acquired without injecting contrast, and optionally at least one set of images with low-level enhancement is acquired by injecting a low dosage of contrast.
- For CT, the contrast is usually an iodine-based contrast agent.
- The CT contrast agent is usually administered based on physician preference, regulatory standards, and patient weight.
- The administration of a low dose means injecting less contrast than standard protocols specify.
- In CT, there can be multiple sets of images with different contrast visualization (possibly the same dosage but different visual appearance) obtained by using multiple-energy radiation.
- The ground truth is a set of images acquired with a 100% full dose of CT contrast.
- For ultrasound, the contrast agent can be, for example, microbubbles.
- FIG. 3 is a schematic overview of the image processing flow.
- The steps of the detailed workflow are applied to the multi-contrast images acquired for each subject.
- In the acquisition stage 300, scans are performed as described above to acquire a zero-contrast image 306, low-contrast image 308, and full-contrast image 310.
- These multi-contrast images then pass through a preprocessing stage 302.
- The resulting images are then used in a deep learning training stage 304 for training a deep learning network to synthesize a full-dose image 312.
- Pre-processing steps include image co-registration and signal normalization.
- Normalization is used to remove bias between images obtained at different dosage levels, and is performed using a mask 314 of non-enhancing tissues. Specifically, to remove systematic differences between signal intensity levels in non-enhancing regions (such as scalp fat), co-registration and signal normalization based on the average voxel value within a mask are performed. Alternatively, the normalization can be based on the max/min intensity of the images or on certain percentiles, or on matching the intensity distribution to a predefined or standard distribution. A mask can be applied when calculating the average, max, min, percentile, or distribution to better match the different sets of images. This step is performed because the transmit and receive gains used for the three sequences of the three different scans are not guaranteed to be the same. For CT and ultrasound, scaling and normalization are also applicable, as there are possible intensity re-scaling steps in both the acquisition and image storage processes.
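The mask-based average-value normalization described above can be sketched in a few lines (an illustrative NumPy sketch, not the patent's implementation; `normalize_by_mask_mean` is our own name):

```python
import numpy as np

def normalize_by_mask_mean(volume, mask):
    """Scale a volume so the mean intensity inside a mask of
    non-enhancing tissue equals 1, removing arbitrary scaling
    differences (e.g. scanner gains) between scans."""
    return volume / volume[mask].mean()

# Toy example: two "scans" of the same anatomy with different gains.
rng = np.random.default_rng(0)
anatomy = rng.uniform(0.5, 1.5, size=(8, 8, 8))
mask = anatomy < 1.0              # stand-in for a non-enhancing-tissue mask

scan_a = 100.0 * anatomy          # scanner gain 100
scan_b = 2500.0 * anatomy         # scanner gain 2500

norm_a = normalize_by_mask_mean(scan_a, mask)
norm_b = normalize_by_mask_mean(scan_b, mask)
# After normalization the two scans are directly comparable.
```

The same pattern extends to max/min, percentile, or distribution-matching normalization by swapping the statistic computed inside the mask.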
- FIG. 4 illustrates the signal model used to synthesize the full-dose contrast-enhanced MRI image 412.
- The low-dose contrast uptake 408, i.e., the difference between the pre-contrast image 400 and the low-dose image 402, is noisy but does contain contrast information.
- A deep learning network is trained using the true 100% full-dose CE-MRI images as the reference ground truth.
- The non-contrast (zero-dose) MRI and the 10% low-dose CE-MRI are provided to the network as inputs, and the output of the network is an approximation of the full-dose CE-MRI.
- This network implicitly learns a guided denoising of the noisy contrast uptake extracted from the difference signal between the low-dose and non-contrast (zero-dose) images, which can then be scaled to generate the contrast enhancement of a full-dose image.
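The difference-signal intuition can be demonstrated numerically (our own toy sketch, not the patent's code): naively scaling the noisy low-dose uptake by 10x amplifies the noise tenfold along with the enhancement, which is why a learned denoising step must precede the scaling.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)

true_enhancement = np.zeros(shape)
true_enhancement[20:30, 20:30] = 1.0      # a contrast-enhancing lesion

zero_dose = rng.normal(0.0, 0.02, shape)  # pre-contrast image (noise only here)
low_dose = zero_dose + 0.1 * true_enhancement + rng.normal(0.0, 0.02, shape)

# Noisy low-dose uptake: 10% of the enhancement plus acquisition noise.
uptake = low_dose - zero_dose

# Naive amplification scales the enhancement AND the noise by 10x.
naive_full = zero_dose + 10.0 * uptake
# The DLN instead learns to denoise the uptake before scaling it.

noise_in_uptake = uptake[40:, 40:]          # region with no enhancement
noise_in_naive = (10.0 * uptake)[40:, 40:]  # noise std exactly 10x larger
```

The ratio of noise standard deviations in the non-enhancing region is exactly 10, matching the "widespread noise" observed when a 1/10-dose image is naively amplified.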
- The detailed deep learning (DL) model architecture in one embodiment of the invention is shown in FIG. 5.
- The model is an encoder-decoder convolutional neural network with 3 encoder steps 500, 502, 504 and 3 decoder steps 506, 508, 510.
- Each step contains 3 convolutional layers, each a 3x3 convolution followed by batch normalization and ReLU activation (3x3 Conv-BN-ReLU).
- Encoder steps are connected in sequence by 2x2 max-pooling, and decoder steps are connected in sequence by 2x2 up-sampling.
- Bypass concatenate connections 512, 514, 516 combine symmetric layers to avoid resolution loss.
- The residual connections 518, 520, 522, 524, 526, 528, 530 enable the model to synthesize a full-dose image by predicting the enhancement signal 540 from the difference between the pre-dose image 542 and low-dose image 544.
- The cost function 532 compares the predicted full-dose image 534 and the reference ground-truth full-dose image 536, which enables optimization of the network parameters via error backpropagation 538.
- The network can have a different number of layers, different image sizes in each layer, and variable connections between layers.
- The function in each layer can be any of various linear or nonlinear functions.
- The output layer can have a different activation function that maps the output to a certain range of intensity.
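The spatial bookkeeping of such an encoder-decoder can be sketched without any deep learning framework (a hypothetical helper; the 128x128 input size is the one mentioned later for the patch-based method):

```python
def encoder_decoder_shapes(size, steps=3):
    """Trace feature-map sizes through an encoder-decoder with 2x2
    max-pooling on the way down and 2x2 up-sampling on the way up.
    Padded 3x3 convolutions keep the size fixed, so only
    pooling/up-sampling change it."""
    down = [size]
    for _ in range(steps):
        assert down[-1] % 2 == 0, "size must be divisible by 2 at each pool"
        down.append(down[-1] // 2)
    up = [down[-1]]
    for _ in range(steps):
        up.append(up[-1] * 2)
    # Bypass (skip) connections concatenate each encoder level with the
    # decoder level of the same size, so sizes must match pairwise.
    assert down[:-1] == up[:0:-1]
    return down, up

down, up = encoder_decoder_shapes(128)
print(down)  # [128, 64, 32, 16]
print(up)    # [16, 32, 64, 128]
```

This also shows why bypass concatenations avoid resolution loss: the full-resolution encoder features are re-injected at the matching decoder level rather than being reconstructed from the 16x16 bottleneck alone.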
- In one embodiment, the network was trained on around 300 2D slices of the co-registered 3D volumes of each patient, excluding the slices at the base of the brain, which had low SNR and no valuable information for anatomy or contrast. Standard rigid image transformations are used to further augment the dataset during training to ensure the robustness of the model.
- Stochastic gradient descent (SGD) was used for each subset of training datasets, and backpropagation was used to optimize the network parameters with respect to a cost function comparing predicted and true full-dose MR images.
- The mean-absolute-error (MAE) cost function, also known as the L1 loss, is used in training.
- Training takes 200 epochs with SGD and the Adam method for optimization, and 10% of the training dataset, chosen with random permutation, was used for validation to optimize hyper-parameters and select the best model among all iterations.
- SGD and Adam are examples of optimizers for solving the optimization problem in training the network. There are many other options, including, for example, RMSprop and Adagrad. Essentially, these optimizers enable faster and smoother convergence.
- The loss function can be, for example, any function mapping a ground-truth image and predicted image pair to a set of loss values. This includes pixel-wise or voxel-wise losses based on pixel/voxel differences, which usually use the L1 (mean-absolute-error) or L2 (mean-squared-error) loss. The loss can also be based on regions of a certain size that consider structural similarity, e.g., SSIM (structural similarity). Or the loss can be computed by other previously or concurrently trained networks, so-called perceptual loss or adversarial loss, respectively. The loss can also be any arbitrary weighted combination of multiple loss functions.
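The pixel-wise losses and a weighted combination can be sketched as follows (an illustrative NumPy sketch; the weights are arbitrary examples, not values from the patent):

```python
import numpy as np

def l1_loss(pred, truth):
    """Mean absolute error (MAE), the L1 loss used in training."""
    return np.abs(pred - truth).mean()

def l2_loss(pred, truth):
    """Mean squared error (MSE), the L2 loss."""
    return ((pred - truth) ** 2).mean()

def combined_loss(pred, truth, weights=(1.0, 0.1)):
    """An arbitrary weighted combination of loss functions."""
    w1, w2 = weights
    return w1 * l1_loss(pred, truth) + w2 * l2_loss(pred, truth)

truth = np.array([[0.0, 1.0], [1.0, 0.0]])
pred = np.array([[0.1, 0.9], [1.0, 0.0]])
print(l1_loss(pred, truth))   # 0.05
print(l2_loss(pred, truth))   # 0.005
```

SSIM, perceptual, and adversarial losses follow the same interface (image pair in, scalar out) but require windowed statistics or an auxiliary network rather than per-pixel differences.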
- At inference time, the trained deep learning network is used to synthesize a full-dose image from zero-dose and low-dose images.
- The co-registered and normalized non-contrast (zero-dose) and low-dose images are loaded from DICOM files and input to the trained network.
- With efficient forward-passing, which takes around 0.1 sec per 512-by-512 image, the synthesized full-dose image is generated.
- This process is conducted for each 2D slice to generate the entire 3D volume, which is stored in a new DICOM folder for further evaluation.
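The slice-by-slice inference loop can be sketched as follows (with a stub in place of the trained network; `trained_model` and its placeholder math are our own illustration, not the patent's model):

```python
import numpy as np

def trained_model(zero_slice, low_slice):
    """Stub standing in for the trained DLN: in the real pipeline this
    is a network forward pass taking ~0.1 s per 512x512 slice."""
    return low_slice + 10.0 * (low_slice - zero_slice)  # placeholder math

def synthesize_volume(zero_vol, low_vol):
    """Run the 2D network over each slice to build the 3D volume."""
    assert zero_vol.shape == low_vol.shape
    slices = [trained_model(z, l) for z, l in zip(zero_vol, low_vol)]
    return np.stack(slices)

# Toy volumes: 4 slices of 8x8 (real data would be 2D slices of a
# co-registered, normalized 3D scan loaded from DICOM).
zero_vol = np.zeros((4, 8, 8))
low_vol = np.full((4, 8, 8), 0.1)
full_vol = synthesize_volume(zero_vol, low_vol)
print(full_vol.shape)  # (4, 8, 8)
```

The resulting stack would then be written back out slice by slice as a new DICOM series.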
- As shown in FIG. 6, the predicted image 606 synthesized from the pre-contrast image 600 and low-dose image 602 shows similar highlighting of contrast enhancement in the lesions as the full-dose image 604.
- The lesions show improved visibility in the synthesized full-contrast image 606, while they cannot be reliably appreciated in the low-dose CE-MRI image 602.
- The synthesized CE-MRI image 606 shows a similar outline of a metastatic lesion in the right posterior cerebellum compared with the one in the true full-contrast CE-MRI image 604.
- The results in FIG. 8 demonstrate noise suppression capability and consistent contrast enhancement capability.
- The engorgement of the dural sinuses and the pachymeningeal enhancement are clearly seen on both the synthesized image 806, predicted from the pre-contrast image 800 and low-dose image 802, and the true full-dose image 804, while there is less noise in non-enhancing regions in the synthesized image.
- The method generates diagnostic-quality contrast from the low-dose acquisition, also demonstrating improved noise suppression and contrast enhancement.
- the synthesized CE-MRI images are improved significantly (both p ⁇ 0.001) compared with the low-dose image on all quantitative metrics with 11.0% improvements in SSIM and over 5.0 dB gains in PS R.
- the proposed method is not significantly different from the acquired full-dose images (both p > 0.05) in overall image quality or clarity of the enhanced contrast.
- the 10% low-contrast images are significantly (p < 0.001) worse in clarity of the enhanced contrast, with a decrease of over 3 points on a 5-point scale.
- the synthesized full-dose images show significant (p < 0.05) improvements over the acquired full-dose CE-MRI in suppressing artifacts in the non-enhancing regions, which was an additional advantage of the DL method.
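The PSNR figure quoted in the quantitative results above can be computed as follows; this is the standard definition, shown here for reference rather than taken from the patent.

```python
import numpy as np

def psnr(reference, candidate, data_range=1.0):
    # Peak signal-to-noise ratio in dB, one of the quantitative
    # metrics reported above (alongside SSIM).
    mse = np.mean((reference - candidate) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A gain of 5 dB corresponds to roughly a 3.2x reduction in mean squared error relative to the reference image.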
- the illustrative embodiment described above uses a 2D CNN. Further performance gains are achievable with 3D CNN models, which exploit the correlated spatial information in adjacent thin slices.
- 2D CNN: processes a single image slice
- 2.5D CNN: processes multiple adjacent slices
- 3D CNN: processes the entire volume
- patch-based: processes patches extracted from the volume
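The four input dimensionalities listed above differ only in the shape of the tensor fed to the network, which can be sketched as follows. The sizes are illustrative (the patent's 2D example uses 512-by-512 slices).

```python
import numpy as np

D, H, W = 64, 512, 512   # slice count and in-plane size (illustrative)
volume = np.zeros((D, H, W))

slice_2d  = volume[32]                       # 2D CNN input: one slice, (H, W)
stack_25d = volume[31:34]                    # 2.5D input: adjacent slices, (3, H, W)
full_3d   = volume                           # 3D input: entire volume, (D, H, W)
patch     = volume[16:24, 128:256, 128:256]  # patch-based input: sub-volume, (8, 128, 128)
```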
- while MAE loss is used as the cost function to train the DL model in the above illustrative embodiment, other loss functions are envisioned, such as mixture loss functions with non-local structural similarities, or using a Generative Adversarial Network (GAN) as a data-driven cost function.
- GAN Generative Adversarial Network
- a fixed level of gadolinium contrast is used, with a 90% reduction from the original clinical usage level.
- full-dose is used herein to refer to a standard dosage enabling certain defined visual quality.
- contrast agents are treated as drugs, and they have an FDA-regulated full dose that should be used.
- the full-dose means a standard dosage that is recommended or required by the FDA or clinicians to achieve good diagnostic quality.
- the technique can make use of a patch-based method wherein the input image size can differ from 128x128x1. It can be any x-by-y-by-z size, using a patch of size x*y with a depth/thickness of z. It is also envisioned that low dose levels higher than 10% may be used. In general, it is beneficial to minimize the dose subject to the constraint that the synthesized image retains diagnostic quality. The resulting minimal low dose may vary depending on the contrast agent and imaging modality.
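The patch-based tiling described above can be sketched as follows. The patch sizes and stride are illustrative assumptions; the text only requires that the input need not be fixed at 128x128x1.

```python
import numpy as np

def extract_patches(volume, xy=128, z=1, stride=64):
    """Tile a (D, H, W) volume into x*y patches of depth z.

    Non-overlapping in depth, overlapping in-plane by `xy - stride`.
    """
    D, H, W = volume.shape
    patches = []
    for d in range(0, D - z + 1, z):
        for y in range(0, H - xy + 1, stride):
            for x in range(0, W - xy + 1, stride):
                patches.append(volume[d:d + z, y:y + xy, x:x + xy])
    return patches
```

At inference time the synthesized patches would be stitched back together (averaging any overlapping regions) to reassemble the full volume.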
- the network can be trained to generalize automatically, improving images from a lower dose dlow to a higher dose dhigh, where the higher dose level is a full dose or less and the lower dose level is less than the higher dose, i.e., 0 ≤ dlow < dhigh ≤ 1.
- the inventors also envision synthesis of high dose images from zero dose images, without the need for any contrast dose images.
- the zero dose images in this case include multiple MR images acquired using different sequences (e.g., T1w, T2w, FLAIR, DWI) that show different appearance/intensity of different tissues.
- the trained network can then predict a synthesized full-contrast T1w to be used for diagnosis.
- the deep learning network is trained with a first image acquired with a first image acquisition sequence and a second image acquired with a second image acquisition sequence distinct from the first image acquisition sequence, where zero contrast agent dose is administered during the diagnostic imaging when these images are acquired.
- images with other sequences may also be used. For example, in one embodiment, there are five images with five different sequences acquired.
- the training uses a reference ground-truth T1w image acquired with full contrast agent dose.
- the images are pre-processed to co-register and normalize them to adjust for acquisition and scaling differences.
- the normalization may include normalizing the intensity within each contrast via histogram matching on median.
- Pre-processing may also include bias field correction to correct the bias field distortion via N4ITK algorithm.
- Pre-processing may also include using a trained classifier (VGG16) to filter out more abnormal slices.
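The intensity normalization step mentioned above can be sketched as follows. This is a simplified stand-in that matches only the foreground medians; full histogram matching on the median, as described, would align more of the intensity distribution.

```python
import numpy as np

def normalize_to_reference(image, reference):
    """Scale `image` so its foreground median matches `reference`'s.

    Foreground is approximated here as strictly positive voxels, an
    assumption for illustration rather than the patent's exact mask.
    """
    med_img = np.median(image[image > 0])
    med_ref = np.median(reference[reference > 0])
    return image * (med_ref / med_img)
```

Co-registration and N4 bias field correction would be applied before this scaling, e.g., with a registration/ITK toolkit.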
- the deep learning network is preferably a 2.5D deep convolutional adversarial network. Using the trained network, a first image and second image acquired with different sequences are then used as input to the deep learning network (DLN) which then generates as output a synthesized full-dose contrast agent image of the subject.
- DLN deep learning network
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Public Health (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Quality & Reliability (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
Description
Claims
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2018346938A AU2018346938B2 (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction for medical imaging using deep learning |
CA3078728A CA3078728A1 (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction for medical imaging using deep learning |
CN201880072487.0A CN111601550B (en) | 2017-10-09 | 2018-10-09 | Contrast agent reduction for medical imaging using deep learning |
JP2020520027A JP7244499B2 (en) | 2017-10-09 | 2018-10-09 | Contrast Agent Dose Reduction in Medical Imaging Using Deep Learning |
KR1020207013155A KR20200063222A (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction in medical imaging with deep learning |
BR112020007105-6A BR112020007105A2 (en) | 2017-10-09 | 2018-10-09 | method for training a diagnostic imaging device to perform a medical diagnostic imaging with a reduced dose of contrast agent |
EP18866524.4A EP3694413A4 (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction for medical imaging using deep learning |
SG11202003232VA SG11202003232VA (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction for medical imaging using deep learning |
JP2023036252A JP7476382B2 (en) | 2017-10-09 | 2023-03-09 | Contrast agent dose reduction in medical imaging using deep learning |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762570068P | 2017-10-09 | 2017-10-09 | |
US62/570,068 | 2017-10-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019074938A1 true WO2019074938A1 (en) | 2019-04-18 |
Family
ID=65994007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/055034 WO2019074938A1 (en) | 2017-10-09 | 2018-10-09 | Contrast dose reduction for medical imaging using deep learning |
Country Status (10)
Country | Link |
---|---|
US (2) | US10997716B2 (en) |
EP (1) | EP3694413A4 (en) |
JP (2) | JP7244499B2 (en) |
KR (1) | KR20200063222A (en) |
CN (1) | CN111601550B (en) |
AU (1) | AU2018346938B2 (en) |
BR (1) | BR112020007105A2 (en) |
CA (1) | CA3078728A1 (en) |
SG (1) | SG11202003232VA (en) |
WO (1) | WO2019074938A1 (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021044153A1 (en) * | 2019-09-04 | 2021-03-11 | Oxford University Innovation Limited | Enhancement of medical images |
WO2021052896A1 (en) | 2019-09-18 | 2021-03-25 | Bayer Aktiengesellschaft | Forecast of mri images by means of a forecast model trained by supervised learning |
WO2021052850A1 (en) | 2019-09-18 | 2021-03-25 | Bayer Aktiengesellschaft | Generation of mri images of the liver |
WO2021069343A1 (en) | 2019-10-11 | 2021-04-15 | Bayer Aktiengesellschaft | Acceleration of mri examinations |
WO2022106302A1 (en) | 2020-11-20 | 2022-05-27 | Bayer Aktiengesellschaft | Representation learning |
WO2022179896A2 (en) | 2021-02-26 | 2022-09-01 | Bayer Aktiengesellschaft | Actor-critic approach for generating synthetic images |
WO2022184297A1 (en) | 2021-03-02 | 2022-09-09 | Bayer Aktiengesellschaft | Machine learning in the field of contrast-enhanced radiology |
WO2022189015A1 (en) | 2021-03-09 | 2022-09-15 | Bayer Aktiengesellschaft | Machine learning in the field of contrast-enhanced radiology |
WO2022223383A1 (en) | 2021-04-21 | 2022-10-27 | Bayer Aktiengesellschaft | Implicit registration for improving synthesized full-contrast image prediction tool |
JP2022550688A (en) * | 2019-09-25 | 2022-12-05 | サトゥル メディカル,インコーポレイテッド | Systems and methods for improving low-dose volume-enhanced MRI |
EP4174868A1 (en) | 2021-11-01 | 2023-05-03 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced mr images |
WO2023073165A1 (en) | 2021-11-01 | 2023-05-04 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced mr images |
EP4202855A1 (en) | 2021-12-22 | 2023-06-28 | Guerbet | Method for processing at least a pre-contrast image and a contrast image respectively depicting a body part prior to and after an injection of a dose of contrast agent |
EP4210069A1 (en) | 2022-01-11 | 2023-07-12 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced ct images |
EP4233726A1 (en) | 2022-02-24 | 2023-08-30 | Bayer AG | Prediction of a representation of an area of an object to be examined after the application of different amounts of a contrast agent |
US11915361B2 (en) | 2019-09-18 | 2024-02-27 | Bayer Aktiengesellschaft | System, method, and computer program product for predicting, anticipating, and/or assessing tissue characteristics |
EP4332601A1 (en) | 2022-09-05 | 2024-03-06 | Bayer AG | Generation of artificial contrast agent-enhanced radiological recordings |
WO2024046833A1 (en) | 2022-08-30 | 2024-03-07 | Bayer Aktiengesellschaft | Generation of synthetic radiological images |
WO2024046831A1 (en) | 2022-08-30 | 2024-03-07 | Bayer Aktiengesellschaft | Generation of synthetic radiological images |
EP4336513A1 (en) | 2022-08-30 | 2024-03-13 | Bayer Aktiengesellschaft | Generation of synthetic radiological recordings |
WO2024052156A1 (en) | 2022-09-05 | 2024-03-14 | Bayer Aktiengesellschaft | Generation of artificial contrast-enhanced radiological images |
EP4369353A1 (en) | 2022-11-12 | 2024-05-15 | Bayer Aktiengesellschaft | Generation of artificial contrast agent-enhanced radiological recordings |
EP4369285A1 (en) | 2022-11-12 | 2024-05-15 | Bayer AG | Generation of artificial contrast agent-enhanced radiological recordings |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11100621B2 (en) * | 2017-10-20 | 2021-08-24 | Imaging Biometrics, Llc | Simulated post-contrast T1-weighted magnetic resonance imaging |
EP3477583A1 (en) * | 2017-10-31 | 2019-05-01 | Koninklijke Philips N.V. | Deep-learning based processing of motion artifacts in magnetic resonance imaging data |
US11880962B2 (en) * | 2018-02-15 | 2024-01-23 | General Electric Company | System and method for synthesizing magnetic resonance images |
US11896360B2 (en) * | 2018-03-12 | 2024-02-13 | Lvis Corporation | Systems and methods for generating thin image slices from thick image slices |
US11164067B2 (en) * | 2018-08-29 | 2021-11-02 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
US20210358183A1 (en) * | 2018-09-28 | 2021-11-18 | Mayo Foundation For Medical Education And Research | Systems and Methods for Multi-Kernel Synthesis and Kernel Conversion in Medical Imaging |
US20200294288A1 (en) * | 2019-03-13 | 2020-09-17 | The Uab Research Foundation | Systems and methods of computed tomography image reconstruction |
CN110223255B (en) * | 2019-06-11 | 2023-03-14 | 太原科技大学 | Low-dose CT image denoising and recursion method based on residual error coding and decoding network |
US20210150671A1 (en) * | 2019-08-23 | 2021-05-20 | The Trustees Of Columbia University In The City Of New York | System, method and computer-accessible medium for the reduction of the dosage of gd-based contrast agent in magnetic resonance imaging |
JP2021036969A (en) * | 2019-08-30 | 2021-03-11 | キヤノン株式会社 | Machine learning device, machine learning method, and program |
WO2021061710A1 (en) | 2019-09-25 | 2021-04-01 | Subtle Medical, Inc. | Systems and methods for improving low dose volumetric contrast-enhanced mri |
CN110852993B (en) * | 2019-10-12 | 2024-03-08 | 拜耳股份有限公司 | Imaging method and device under action of contrast agent |
JP7394588B2 (en) * | 2019-11-07 | 2023-12-08 | キヤノン株式会社 | Information processing device, information processing method, and imaging system |
US10984530B1 (en) * | 2019-12-11 | 2021-04-20 | Ping An Technology (Shenzhen) Co., Ltd. | Enhanced medical images processing method and computing device |
EP3872754A1 (en) * | 2020-02-28 | 2021-09-01 | Siemens Healthcare GmbH | Method and system for automated processing of images when using a contrast agent in mri |
CN111325695B (en) * | 2020-02-29 | 2023-04-07 | 深圳先进技术研究院 | Low-dose image enhancement method and system based on multi-dose grade and storage medium |
CN111388000B (en) * | 2020-03-27 | 2023-08-25 | 上海杏脉信息科技有限公司 | Virtual lung air retention image prediction method and system, storage medium and terminal |
CN112508835B (en) * | 2020-12-10 | 2024-04-26 | 深圳先进技术研究院 | GAN-based contrast agent-free medical image enhancement modeling method |
WO2022120734A1 (en) * | 2020-12-10 | 2022-06-16 | 深圳先进技术研究院 | Contrast-agent-free medical image enhancement method based on gan |
CN112634390B (en) * | 2020-12-17 | 2023-06-13 | 深圳先进技术研究院 | High-energy image synthesis method and device for generating countermeasure network model based on Wasserstein |
CN112733624B (en) * | 2020-12-26 | 2023-02-03 | 电子科技大学 | People stream density detection method, system storage medium and terminal for indoor dense scene |
KR102316312B1 (en) * | 2021-02-01 | 2021-10-22 | 주식회사 클라리파이 | Apparatus and method for contrast amplification of contrast-enhanced ct images based on deep learning |
JP7376053B2 (en) | 2021-02-01 | 2023-11-08 | クラリピーアイ インコーポレイテッド | Contrast enhancement CT image contrast amplification device and method based on deep learning |
US20240065772A1 (en) * | 2021-02-03 | 2024-02-29 | Cordiguide Ltd. | Navigation assistance in a medical procedure |
EP4044120A1 (en) * | 2021-02-15 | 2022-08-17 | Koninklijke Philips N.V. | Training data synthesizer for contrast enhancing machine learning systems |
US11727087B2 (en) * | 2021-04-05 | 2023-08-15 | Nano-X Ai Ltd. | Identification of a contrast phase depicted in a medical image |
US11967066B2 (en) * | 2021-04-12 | 2024-04-23 | Daegu Gyeongbuk Institute Of Science And Technology | Method and apparatus for processing image |
CN113033704B (en) * | 2021-04-22 | 2023-11-07 | 江西理工大学 | Intelligent judging method and system for copper converter converting copper-making final point based on pattern recognition |
CN113379616B (en) * | 2021-04-28 | 2023-09-05 | 北京航空航天大学 | Method for generating gadolinium contrast agent enhanced magnetic resonance image |
US20220381861A1 (en) * | 2021-05-19 | 2022-12-01 | Siemens Healthcare Gmbh | Method and system for accelerated acquisition and artifact reduction of undersampled mri using a deep learning based 3d generative adversarial network |
EP4095796A1 (en) * | 2021-05-29 | 2022-11-30 | Bayer AG | Machine learning in the field of radiology with contrast agent |
EP4113537A1 (en) * | 2021-06-30 | 2023-01-04 | Guerbet | Methods for training a prediction model, or for processing at least a pre-contrast image depicting a body part prior to an injection of contrast agent using said prediction model |
US11903760B2 (en) * | 2021-09-08 | 2024-02-20 | GE Precision Healthcare LLC | Systems and methods for scan plane prediction in ultrasound images |
WO2023055721A1 (en) * | 2021-09-29 | 2023-04-06 | Subtle Medical, Inc. | Systems and methods for contrast dose reduction |
EP4334900A1 (en) * | 2021-10-15 | 2024-03-13 | Bracco Imaging S.p.A. | Training a machine learning model for simulating images at higher dose of contrast agent in medical imaging applications |
EP4224420A1 (en) | 2022-02-04 | 2023-08-09 | Siemens Healthcare GmbH | A computer-implemented method for determining scar segmentation |
EP4246526A1 (en) * | 2022-03-17 | 2023-09-20 | Koninklijke Philips N.V. | System and method for providing enhancing or contrast agent advisability indicator |
WO2023192392A1 (en) * | 2022-03-29 | 2023-10-05 | Angiowave Imaging, Llc | System and method for angiographic dose reduction using machine learning |
EP4290263A1 (en) * | 2022-06-09 | 2023-12-13 | Rheinische Friedrich-Wilhelms-Universität Bonn | Method and apparatus for providing mri images relating to at least one body portion of a patient with reduced administration of contrast agent |
WO2024044476A1 (en) * | 2022-08-23 | 2024-02-29 | Subtle Medical, Inc. | Systems and methods for contrast-enhanced mri |
WO2024046832A1 (en) * | 2022-08-30 | 2024-03-07 | Bayer Aktiengesellschaft | Generation of synthetic radiological images |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050171409A1 (en) * | 2004-01-30 | 2005-08-04 | University Of Chicago | Automated method and system for the detection of lung nodules in low-dose CT image for lung-cancer screening |
US20150201895A1 (en) * | 2012-08-31 | 2015-07-23 | The University Of Chicago | Supervised machine learning technique for reduction of radiation dose in computed tomography imaging |
US20160106321A1 (en) * | 2013-10-17 | 2016-04-21 | Siemens Aktiengesellschaft | Method and System for Machine Learning Based Assessment of Fractional Flow Reserve |
WO2016175755A1 (en) * | 2015-04-28 | 2016-11-03 | Siemens Healthcare Gmbh | METHOD AND SYSTEM FOR SYNTHESIZING VIRTUAL HIGH DOSE OR HIGH kV COMPUTED TOMOGRAPHY IMAGES FROM LOW DOSE OR LOW kV COMPUTED TOMOGRAPHY IMAGES |
US20180018757A1 (en) * | 2016-07-13 | 2018-01-18 | Kenji Suzuki | Transforming projection data in tomography by means of machine learning |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10221643A1 (en) | 2002-05-15 | 2003-12-04 | Siemens Ag | Image data correcting method for MRI tomography apparatus used for e.g. heart perfusion measurement, involves subtracting value of image element in uniform region from other respective values of image element in normalized image data |
DE102006001681B4 (en) | 2006-01-12 | 2008-07-10 | Wismüller, Axel, Dipl.-Phys. Dr.med. | Method and device for displaying multi-channel image data |
JP2007264951A (en) | 2006-03-28 | 2007-10-11 | Dainippon Printing Co Ltd | Medical image correction device |
WO2009020687A2 (en) * | 2007-05-18 | 2009-02-12 | Henry Ford Health System | Mri estimation of contrast agent concentration using a neural network approach |
US8737715B2 (en) * | 2009-07-13 | 2014-05-27 | H. Lee Moffitt Cancer And Research Institute, Inc. | Methods and apparatus for diagnosis and/or prognosis of cancer |
BR112012033776A2 (en) * | 2010-07-12 | 2016-11-01 | Ge Healthcare As | composition and method of x-ray examination. |
US9378548B2 (en) * | 2011-12-01 | 2016-06-28 | St. Jude Children's Research Hospital | T2 spectral analysis for myelin water imaging |
CN103823956B (en) * | 2012-11-19 | 2017-10-31 | 西门子(中国)有限公司 | Determine that contrast agent uses method, device and the imaging device of parameter |
US20150025666A1 (en) * | 2013-07-16 | 2015-01-22 | Children's National Medical Center | Three dimensional printed replicas of patient's anatomy for medical applications |
US11647915B2 (en) * | 2014-04-02 | 2023-05-16 | University Of Virginia Patent Foundation | Systems and methods for medical imaging incorporating prior knowledge |
US9990712B2 (en) | 2015-04-08 | 2018-06-05 | Algotec Systems Ltd. | Organ detection and segmentation |
EP3295202B1 (en) * | 2015-05-08 | 2018-07-25 | Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. | Method and device for magnetic resonance imaging with improved sensitivity by noise reduction |
JP6450053B2 (en) | 2015-08-15 | 2019-01-09 | セールスフォース ドット コム インコーポレイティッド | Three-dimensional (3D) convolution with 3D batch normalization |
WO2017030276A1 (en) | 2015-08-17 | 2017-02-23 | 삼성전자(주) | Medical image display device and medical image processing method |
US10547873B2 (en) * | 2016-05-23 | 2020-01-28 | Massachusetts Institute Of Technology | System and method for providing real-time super-resolution for compressed videos |
WO2017223560A1 (en) | 2016-06-24 | 2017-12-28 | Rensselaer Polytechnic Institute | Tomographic image reconstruction via machine learning |
US10096109B1 (en) * | 2017-03-31 | 2018-10-09 | The Board Of Trustees Of The Leland Stanford Junior University | Quality of medical images using multi-contrast and deep learning |
US11100621B2 (en) * | 2017-10-20 | 2021-08-24 | Imaging Biometrics, Llc | Simulated post-contrast T1-weighted magnetic resonance imaging |
US10482600B2 (en) * | 2018-01-16 | 2019-11-19 | Siemens Healthcare Gmbh | Cross-domain image analysis and cross-domain image synthesis using deep image-to-image networks and adversarial networks |
US10789696B2 (en) * | 2018-05-24 | 2020-09-29 | Tfi Digital Media Limited | Patch selection for neural network based no-reference image quality assessment |
US10665011B1 (en) * | 2019-05-31 | 2020-05-26 | Adobe Inc. | Dynamically estimating lighting parameters for positions within augmented-reality scenes based on global and local features |
-
2018
- 2018-10-09 BR BR112020007105-6A patent/BR112020007105A2/en unknown
- 2018-10-09 SG SG11202003232VA patent/SG11202003232VA/en unknown
- 2018-10-09 CA CA3078728A patent/CA3078728A1/en active Pending
- 2018-10-09 CN CN201880072487.0A patent/CN111601550B/en active Active
- 2018-10-09 AU AU2018346938A patent/AU2018346938B2/en active Active
- 2018-10-09 JP JP2020520027A patent/JP7244499B2/en active Active
- 2018-10-09 KR KR1020207013155A patent/KR20200063222A/en not_active Application Discontinuation
- 2018-10-09 EP EP18866524.4A patent/EP3694413A4/en active Pending
- 2018-10-09 US US16/155,581 patent/US10997716B2/en active Active
- 2018-10-09 WO PCT/US2018/055034 patent/WO2019074938A1/en unknown
-
2021
- 2021-04-26 US US17/239,898 patent/US11935231B2/en active Active
-
2023
- 2023-03-09 JP JP2023036252A patent/JP7476382B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050171409A1 (en) * | 2004-01-30 | 2005-08-04 | University Of Chicago | Automated method and system for the detection of lung nodules in low-dose CT image for lung-cancer screening |
US20150201895A1 (en) * | 2012-08-31 | 2015-07-23 | The University Of Chicago | Supervised machine learning technique for reduction of radiation dose in computed tomography imaging |
US20160106321A1 (en) * | 2013-10-17 | 2016-04-21 | Siemens Aktiengesellschaft | Method and System for Machine Learning Based Assessment of Fractional Flow Reserve |
WO2016175755A1 (en) * | 2015-04-28 | 2016-11-03 | Siemens Healthcare Gmbh | METHOD AND SYSTEM FOR SYNTHESIZING VIRTUAL HIGH DOSE OR HIGH kV COMPUTED TOMOGRAPHY IMAGES FROM LOW DOSE OR LOW kV COMPUTED TOMOGRAPHY IMAGES |
US20180018757A1 (en) * | 2016-07-13 | 2018-01-18 | Kenji Suzuki | Transforming projection data in tomography by means of machine learning |
Non-Patent Citations (2)
Title |
---|
MARDANI, M. ET AL.: "Deep Generative Adversarial Networks for Compressed Sensing (GANCS) Automates MRI", 31 May 2017 (2017-05-31), XP055593025, Retrieved from the Internet <URL:https://arxiv.org/pdf/1706.00051.pdf> [retrieved on 20190206] * |
XIANG LEI ET AL.: "Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI", NEUROCOMPUTING, vol. 267, pages 406 - 416, XP085154155, DOI: 10.1016/j.neucom.2017.06.048 |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021044153A1 (en) * | 2019-09-04 | 2021-03-11 | Oxford University Innovation Limited | Enhancement of medical images |
EP4231037A1 (en) | 2019-09-18 | 2023-08-23 | Bayer Aktiengesellschaft | Acceleration of mri examinations |
WO2021052896A1 (en) | 2019-09-18 | 2021-03-25 | Bayer Aktiengesellschaft | Forecast of mri images by means of a forecast model trained by supervised learning |
WO2021052850A1 (en) | 2019-09-18 | 2021-03-25 | Bayer Aktiengesellschaft | Generation of mri images of the liver |
US11915361B2 (en) | 2019-09-18 | 2024-02-27 | Bayer Aktiengesellschaft | System, method, and computer program product for predicting, anticipating, and/or assessing tissue characteristics |
US11727571B2 (en) | 2019-09-18 | 2023-08-15 | Bayer Aktiengesellschaft | Forecast of MRI images by means of a forecast model trained by supervised learning |
JP2022550688A (en) * | 2019-09-25 | 2022-12-05 | サトゥル メディカル,インコーポレイテッド | Systems and methods for improving low-dose volume-enhanced MRI |
WO2021069343A1 (en) | 2019-10-11 | 2021-04-15 | Bayer Aktiengesellschaft | Acceleration of mri examinations |
EP4241672A2 (en) | 2019-10-11 | 2023-09-13 | Bayer Aktiengesellschaft | Acceleration of mri examinations |
WO2022106302A1 (en) | 2020-11-20 | 2022-05-27 | Bayer Aktiengesellschaft | Representation learning |
WO2022179896A2 (en) | 2021-02-26 | 2022-09-01 | Bayer Aktiengesellschaft | Actor-critic approach for generating synthetic images |
WO2022184298A1 (en) * | 2021-03-02 | 2022-09-09 | Bayer Aktiengesellschaft | System, method, and computer program product for contrast-enhanced radiology using machine learning |
WO2022184297A1 (en) | 2021-03-02 | 2022-09-09 | Bayer Aktiengesellschaft | Machine learning in the field of contrast-enhanced radiology |
WO2022189015A1 (en) | 2021-03-09 | 2022-09-15 | Bayer Aktiengesellschaft | Machine learning in the field of contrast-enhanced radiology |
WO2022223383A1 (en) | 2021-04-21 | 2022-10-27 | Bayer Aktiengesellschaft | Implicit registration for improving synthesized full-contrast image prediction tool |
WO2023073165A1 (en) | 2021-11-01 | 2023-05-04 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced mr images |
EP4174868A1 (en) | 2021-11-01 | 2023-05-03 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced mr images |
WO2023118044A1 (en) | 2021-12-22 | 2023-06-29 | Guerbet | Method for processing at least a pre-contrast image and a contrast image respectively depicting a body part prior to and after an injection of a first dose of contrast agent |
EP4202855A1 (en) | 2021-12-22 | 2023-06-28 | Guerbet | Method for processing at least a pre-contrast image and a contrast image respectively depicting a body part prior to and after an injection of a dose of contrast agent |
WO2023135056A1 (en) | 2022-01-11 | 2023-07-20 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced ct images |
EP4210069A1 (en) | 2022-01-11 | 2023-07-12 | Bayer Aktiengesellschaft | Synthetic contrast-enhanced ct images |
WO2023161041A1 (en) | 2022-02-24 | 2023-08-31 | Bayer Aktiengesellschaft | Prediction of representations of an examination area of an examination object after applications of different amounts of a contrast agent |
EP4233726A1 (en) | 2022-02-24 | 2023-08-30 | Bayer AG | Prediction of a representation of an area of an object to be examined after the application of different amounts of a contrast agent |
WO2024046833A1 (en) | 2022-08-30 | 2024-03-07 | Bayer Aktiengesellschaft | Generation of synthetic radiological images |
WO2024046831A1 (en) | 2022-08-30 | 2024-03-07 | Bayer Aktiengesellschaft | Generation of synthetic radiological images |
EP4336513A1 (en) | 2022-08-30 | 2024-03-13 | Bayer Aktiengesellschaft | Generation of synthetic radiological recordings |
EP4332601A1 (en) | 2022-09-05 | 2024-03-06 | Bayer AG | Generation of artificial contrast agent-enhanced radiological recordings |
WO2024052156A1 (en) | 2022-09-05 | 2024-03-14 | Bayer Aktiengesellschaft | Generation of artificial contrast-enhanced radiological images |
EP4369353A1 (en) | 2022-11-12 | 2024-05-15 | Bayer Aktiengesellschaft | Generation of artificial contrast agent-enhanced radiological recordings |
EP4369285A1 (en) | 2022-11-12 | 2024-05-15 | Bayer AG | Generation of artificial contrast agent-enhanced radiological recordings |
WO2024100234A1 (en) | 2022-11-12 | 2024-05-16 | Bayer Aktiengesellschaft | Generation of artificial contrast-enhanced radiological images |
WO2024100233A1 (en) | 2022-11-12 | 2024-05-16 | Bayer Aktiengesellschaft | Generation of artificial contrast-enhanced radiological images |
Also Published As
Publication number | Publication date |
---|---|
JP2020536638A (en) | 2020-12-17 |
KR20200063222A (en) | 2020-06-04 |
AU2018346938A1 (en) | 2020-04-23 |
CN111601550B (en) | 2023-12-05 |
US20210241458A1 (en) | 2021-08-05 |
US11935231B2 (en) | 2024-03-19 |
US10997716B2 (en) | 2021-05-04 |
EP3694413A4 (en) | 2021-06-30 |
EP3694413A1 (en) | 2020-08-19 |
CA3078728A1 (en) | 2019-04-18 |
AU2018346938B2 (en) | 2024-04-04 |
US20190108634A1 (en) | 2019-04-11 |
JP7244499B2 (en) | 2023-03-22 |
SG11202003232VA (en) | 2020-05-28 |
JP7476382B2 (en) | 2024-04-30 |
BR112020007105A2 (en) | 2020-09-24 |
JP2023078236A (en) | 2023-06-06 |
CN111601550A (en) | 2020-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11935231B2 (en) | Contrast dose reduction for medical imaging using deep learning | |
Hu et al. | Bidirectional mapping generative adversarial networks for brain MR to PET synthesis | |
Zeng et al. | Simultaneous single-and multi-contrast super-resolution for brain MRI images based on a convolutional neural network | |
AU2019234674B2 (en) | Systems and methods for generating thin image slices from thick image slices | |
US20210150671A1 (en) | System, method and computer-accessible medium for the reduction of the dosage of gd-based contrast agent in magnetic resonance imaging | |
KR20230129195A (en) | Dose reduction for medical imaging using deep convolutional neural networks | |
JP2022550688A (en) | Systems and methods for improving low-dose volume-enhanced MRI | |
Cackowski et al. | ImUnity: a generalizable VAE-GAN solution for multicenter MR image harmonization | |
Jun et al. | Parallel imaging in time‐of‐flight magnetic resonance angiography using deep multistream convolutional neural networks | |
Kim et al. | Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization | |
Faisal | Image Inpainting to Improve the Registration Performance of Multiple Sclerosis (MS) Patient Brain with Brain Atlas | |
Izadi et al. | Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks | |
CN112700380A (en) | PET image volume correction method based on MR gradient information and deep learning | |
Zhao et al. | Using anatomic magnetic resonance image information to enhance visualization and interpretation of functional images: a comparison of methods applied to clinical arterial spin labeling images | |
Sui et al. | Gradient-guided isotropic MRI reconstruction from anisotropic acquisitions | |
Tohidi et al. | Joint synthesis of WMn MPRAGE and parameter maps using deep learning and an imaging equation | |
Lecoeur et al. | Improving white matter tractography by resolving the challenges of edema | |
Mao et al. | PET parametric imaging based on MR frequency-domain texture information | |
US20230136320A1 (en) | System and method for control of motion in medical images using aggregation | |
Ikuta et al. | Super-Resolution for Brain MR Images from a Significantly Small Amount of Training Data. Comput | |
Ding et al. | Medical Image Quality Assessment | |
CN112150401A (en) | Method for enhancing image quality based on deep learning | |
Bai et al. | DD-WGAN: Generative Adversarial Networks with Wasserstein Distance and Dual-Domain Discriminators for Low-Dose CT | |
Gan et al. | Pseudo-MRI-Guided PET Image Reconstruction Method Based on a Diffusion Probabilistic Model | |
WO2023041748A1 (en) | Apparatus and method for generating a perfusion image, and method for training an artificial neural network therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18866524 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3078728 Country of ref document: CA Ref document number: 2020520027 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2018346938 Country of ref document: AU Date of ref document: 20181009 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20207013155 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2018866524 Country of ref document: EP Effective date: 20200511 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112020007105 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112020007105 Country of ref document: BR Kind code of ref document: A2 Effective date: 20200408 |