US20180286037A1 - Quality of Medical Images Using Multi-Contrast and Deep Learning - Google Patents

Quality of Medical Images Using Multi-Contrast and Deep Learning

Info

Publication number
US20180286037A1
Authority
US
United States
Prior art keywords
subject
images
image
input
contrast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/475,760
Other versions
US10096109B1 (en
Inventor
Greg Zaharchuk
Enhao Gong
John M. Pauly
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leland Stanford Junior University
Original Assignee
Leland Stanford Junior University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leland Stanford Junior University filed Critical Leland Stanford Junior University
Priority to US15/475,760 priority Critical patent/US10096109B1/en
Assigned to THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY reassignment THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GONG, Enhao, PAULY, JOHN M., ZAHARCHUK, GREG
Priority to EP18774771.2A priority patent/EP3600026B1/en
Priority to CN201880022949.8A priority patent/CN110461228B/en
Priority to PL18774771T priority patent/PL3600026T3/en
Priority to DK21207739.0T priority patent/DK3971824T3/en
Priority to DK18774771.2T priority patent/DK3600026T3/en
Priority to EP21207739.0A priority patent/EP3971824B1/en
Priority to PCT/US2018/023383 priority patent/WO2018183044A1/en
Priority to EP22204287.1A priority patent/EP4148660B1/en
Priority to ES21207739T priority patent/ES2937424T3/en
Priority to PL21207739.0T priority patent/PL3971824T3/en
Priority to ES18774771T priority patent/ES2906349T3/en
Priority to US16/152,185 priority patent/US10467751B2/en
Publication of US20180286037A1 publication Critical patent/US20180286037A1/en
Publication of US10096109B1 publication Critical patent/US10096109B1/en
Application granted granted Critical
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/60
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering


Abstract

A method of improving diagnostic and functional imaging is provided by obtaining at least two input images of a subject, using a medical imager, where each input image includes a different contrast, generating a plurality of copies of the input images using non-local mean (NLM) filtering, using an appropriately programmed computer, where each input image copy of the subject includes different spatial characteristics, obtaining at least one reference image of the subject, using the medical imager, where the reference image includes imaging characteristics that are different from the input images of the subject, training a deep network model, using data augmentation on the appropriately programmed computer, to adaptively tune model parameters to approximate the reference image from an initial set of the input and reference images, with the goal of outputting an improved quality image from other sets of low-SNR, low-resolution images, for analysis by a physician.

Description

    FIELD OF THE INVENTION
  • The invention relates to medical imaging. More specifically, the invention relates to improving the quality of medical images using multi-contrast imaging, multi-lateral filters, and deep learning methods.
  • BACKGROUND OF THE INVENTION
  • With medical image denoising, multiple methods have been proposed, including Gaussian filtering, wavelet filtering, and non-local means (NLM) algorithms, where experimentation has shown that NLM (possibly combined with wavelet filtering) is the superior method. However, all these methods still share some disadvantages, such as the dependency on parameter tuning for different images. In one instance, a proposed method used the redundancy in, and relationships among, multi-contrast images as a prior for image denoising. Related works have combined a blurry and a noisy pair of images for CMOS sensors and cameras. A further implementation used a group-sparsity representation for image denoising, which also used the multi-contrast information, but the higher-SNR contrast was not used to improve the noisier contrast. In another concept relating to the redundancy of multi-contrast images, regularization for compressed sensing reconstruction of undersampled multi-contrast images was demonstrated.
  • There have been recent developments in deep learning research. Specifically, recent advances in convolutional neural networks (CNNs) for image recognition with deep residual networks, and in super-resolution using CNNs, have shown great promise for improving image resolution. In the past five years, deep learning techniques have advanced the performance of computer vision, specifically in image recognition. The Deep Residual Network (ResNet) approach has been validated as a superior network structure for CNNs because its by-pass connections help the performance of the CNN. These advances give computer vision algorithms super-human recognition capability. However, it is not clear that such models can be trained as well for medical imaging, since far fewer data sets are available for training, and deep networks typically need thousands or millions of samples due to the number of parameters in the model. Further, it is not clear which network structure is best for medical images, because their intrinsic properties differ from those of common objects in photographs. Finally, it is not fully known how to ensure that the model does not introduce artifacts that are not in the image, or miss details of pathology that the model has not seen in the training data.
  • Super-resolution (SR) CNN methods are used to generate super resolution for images and videos (multi-frame). In one demonstration with 91 images (from a public benchmark dataset), the SRCNN models achieved performance similar to a model trained on a large dataset (ImageNet, with millions of sub-images). This is because the SRCNN model size (around 10K parameters) is not as large as the models used for other image recognition methods. Further, the training samples the model sees are counted as smaller local patches, which yields tens of thousands of patches from the 91 full images. Additionally, these relatively few samples can already capture sufficient variability of natural image patches. SR works aim for better aesthetic perception, but they do not address the need to avoid artifacts while preserving details and pathology in medical images.
  • Arterial spin labeling (ASL) MRI uses the signal difference between labeled and control images to quantify blood perfusion. It is a powerful MRI technique and is applied increasingly for research and for clinical diagnosis of neurological, cerebrovascular, and psychiatric diseases. However, ASL perfusion maps typically suffer from low SNR because of the signal subtraction. The SNR can be increased if the ASL scans are repeated three or more times, which is done clinically to achieve acceptable image quality; however, repeating the scans significantly increases the examination time. Recently proposed multidelay ASL (eASL) can compensate for the effect of varying transit delays for better sensitivity of the perfusion measurement. However, acquiring different delays further increases the time cost and results in even lower SNR and resolution due to the time constraints.
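  • As a purely illustrative aside (synthetic numbers, not from the patent), the SNR cost of the subtraction and the benefit of repetition can be sketched in a few lines: averaging Nex independent repeats of the small control-minus-label difference improves SNR by roughly the square root of Nex, which is why acceptable image quality traditionally requires several repeats and therefore longer scans.

```python
import numpy as np

# Synthetic sketch: averaging Nex noisy control-minus-label repeats
# improves SNR by ~sqrt(Nex).
rng = np.random.default_rng(0)
true_difference = 1.0   # arbitrary units for the perfusion-weighted signal
noise_sigma = 5.0       # subtraction noise dominates the small signal

def averaged_difference(nex):
    """Mean of `nex` noisy control-minus-label measurements."""
    repeats = true_difference + noise_sigma * rng.standard_normal(nex)
    return repeats.mean()

for nex in (1, 3, 6):
    estimates = np.array([averaged_difference(nex) for _ in range(10000)])
    print(f"Nex={nex}: empirical SNR {true_difference / estimates.std():.2f}, "
          f"expected about {np.sqrt(nex) * true_difference / noise_sigma:.2f}")
```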
  • What is needed is a method of image denoising, rather than super-resolution generation, that improves medical images having multiple contrasts.
  • SUMMARY OF THE INVENTION
  • To address the needs in the art, a method of shortening imaging time for diagnostic and functional imaging is provided that includes obtaining at least two input images of a subject, using a medical imager, where each input image includes a different contrast, generating a plurality of copies of the input images of the subject using non-local mean (NLM) filtering, using an appropriately programmed computer, where each input image copy of the subject includes different spatial characteristics, obtaining at least one reference image of the subject, using the medical imager, where at least one reference image of the subject includes imaging characteristics that are different from the input images of the subject, training a deep network model, using data augmentation on the appropriately programmed computer to adaptively tune model parameters to approximate the reference image from an initial set of the input images, and outputting an improved quality image of the subject, for analysis by a physician. Once the model parameters are set, improved quality images can be obtained without the need to acquire a reference image.
  • According to one aspect of the invention, the medical imager includes a magnetic resonance imager (MRI), or a computed tomography (CT) scanner.
  • In another aspect of the invention, the data augmentation includes cropping, rotating, or flipping the input images of the subject.
  • In a further aspect of the invention, the imaging characteristics of at least one reference image of the subject that are different from the input images of the subject include a higher SNR, a higher resolution, less artifacts, a different image contrast, an image obtained using a CT imager, or an image obtained using an MRI imager.
  • In yet another aspect of the invention, the data set includes an arterial spin labeling (ASL) dataset, an MRI dataset, or a CT dataset.
  • According to one aspect of the invention, the data augmentation includes training a deep network model on a plurality of patches of the input images of the subject for the data augmentation, where output images of the subject are reassembled from individual input image patches.
  • In another aspect of the invention, multi-contrast information is used as a regularization for the NLM for improved regularized denoising and over-smoothing avoidance.
  • In a further aspect of the invention, the data augmentation further includes using the input images of the subject and outputs from nonlinear filters as inputs for the deep network model, where the input images of the subject are acquired from arterial spin labeling (ASL) and other contrast images of a brain, where the NLM filtering is used on the ASL image and regularized using the other contrast images, where the data augmentation is used on all the input images of the subject and on images created from the NLM filters, where all the augmented data is fit into the deep network model. Here, the deep network model includes using multi-contrast patches of the other images for convolution and de-convolution with the input images of the subject that by-pass use of a whole data set to enable residual learning.
  • In yet another aspect of the invention, the input images and the reference image are from different medical imagers, where the input image of the subject includes an MRI image of the subject that is used to predict a CT image of the subject.
  • According to one aspect of the invention, the input images of the subject, the plurality of copies of the input images of the subject and the at least one reference image of the subject includes a data set.
  • In a further aspect of the invention, the trained deep network is applied for improving and denoising any relatively low quality medical images.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a flow diagram of an imaging processing algorithm for improved image target contrast, according to one embodiment of the invention.
  • FIGS. 2A-2B show flow diagrams of (2A) improved training and (2B) application algorithms for ASL denoising, according to one embodiment of the invention.
  • DETAILED DESCRIPTION
  • The current invention provides a method to improve the quality of medical images. The invention provides a new end-to-end deep learning framework that takes raw image data and nonlinear filter results (denoised versions of the raw images at different denoising levels), adds multi-contrast image data that share the same anatomy but have different contrasts, and generates improved image data with better quality in terms of resolution and SNR. The end-to-end framework of the current invention achieves better performance and faster speed.
  • In one embodiment, the invention improves the quality of MRI images that typically have low SNR and resolution, for example arterial spin labeling (ASL) MRI images. The invention improves the image quality by using multi-contrast information from other images with the same anatomical structure but different contrasts, and by using deep learning techniques as an effective and efficient approach.
  • FIG. 1 shows a flow diagram of one embodiment of the invention, in which a medical imager obtains an image of a subject of interest, where the image is a relatively noisy, low resolution image that is input to an appropriately programmed computer. In an optional instance, other images of the same subject of interest (anatomy) having different contrasts are also input to the computer. According to the current embodiment, multiple non-local-mean (NLM) filters having different parameters are applied to the input images to generate multiple versions of the filtered images. Multi-contrast image co-registration is performed, then multi-contrast images are generated. Images with different contrasts and the same anatomy are provided for data augmentation, where the data augmentation includes cropping, rotation, translation, and flipping. The image of the subject of interest is again input together with the multi-contrast images. The algorithm provides an image-to-patch generator to fully execute the augmentation. These augmented images are input to a deep network comprising residual learning and a convolutional neural network. The model is trained using a reference image of the target contrast that has a relatively high SNR and better resolution, where this image is input to an image-to-patch generator to output a reference image patch of the target contrast, having high SNR and better resolution, for use in training. An improved patch of the target contrast is generated and input to a patch-to-target generator, which outputs an improved image of the target contrast having improved SNR and resolution.
  • The current invention provides superior denoising using a data-driven method that trains a model (supervised learning) based on ground-truth, high-SNR, high-resolution images acquired using a longer scan. The model is a highly nonlinear mapping from low image quality to reference high image quality. In this way, the model achieves significantly better performance for its specific application. In one embodiment, multi-contrast images are used for denoising, where the multi-contrast image information is used in multiple steps that include, but are not limited to, improving a non-local mean (NLM) algorithm in which the similarity weights used for denoising depend on the similarity of multiple contrasts between images, or between cropped portions of the images, as illustrated in the sketch that follows. The improvement arises in part because the SNR of the multi-contrast images is much higher than that of the original image being improved (e.g., ASL), so that the similarity can be computed more accurately. Here, if there is a pathology that can only be seen in some of the contrasts, the difference will be shown in a multi-contrast comparison and the lesion will not be over-smoothed. The invention uses the multi-contrast information as regularization for nonlinear filters, such as an NLM filter, which better regularizes the denoising and avoids over-smoothing.
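  • The following minimal sketch illustrates the idea of computing NLM similarity weights on a co-registered, higher-SNR guide contrast while averaging the noisy target contrast. It is a simplified stand-in, not the patent's actual filter; the function name, window sizes, and the smoothing parameter h are assumptions.

```python
import numpy as np

def guided_nlm(noisy, guide, patch=3, search=7, h=0.1):
    """NLM where similarity weights come from a high-SNR guide image
    (e.g., T2w) but the weighted average is applied to the noisy image."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    noisy_p = np.pad(noisy, pad, mode="reflect")
    guide_p = np.pad(guide, pad, mode="reflect")
    out = np.zeros_like(noisy, dtype=float)
    H, W = noisy.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = guide_p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = guide_p[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)      # similarity from the guide
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(noisy_p[ni, nj])       # average the noisy contrast
            w, v = np.array(weights), np.array(values)
            out[i, j] = np.sum(w * v) / np.sum(w)
    return out

# Toy usage: a piecewise-constant "anatomy" guide and a low-SNR target contrast.
rng = np.random.default_rng(1)
guide = np.kron(rng.random((8, 8)), np.ones((8, 8)))          # 64x64 guide image
noisy = 0.2 * guide + 0.3 * rng.standard_normal(guide.shape)  # low-SNR target
print(guided_nlm(noisy, guide).shape)                         # (64, 64)
```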
  • Further, multi-contrast images are directly input to the deep network, which incorporates the multiple versions of the denoised (ASL) images and the multi-contrast images as well. The model is then trained to nonlinearly ensemble all the denoised images to achieve the best image quality, including better SNR and better resolution.
  • In practice, by using the CNN method with hardware (GPU) acceleration powered by deep learning frameworks, the method can achieve improvement efficiently.
  • Previously, deep networks usually required millions of samples to optimize their recognition performance. In the deep learning aspects of the current invention, analogous to SRCNN models, the training is done with very small datasets. What the model learns is the residual, i.e., the difference between the raw image data and the ground-truth image data, which is sparser and less complex to approximate using the network structure. The invention uses by-pass connections to enable the residual learning. Here, a residual network is used and the direct model output is the estimated residual/error between the low-quality and high-quality images. This "residual training" approach reduces the complexity of training and achieves better performance; because the output level is small, the likelihood of introducing large image artifacts is reduced even when the model does not predict perfectly. This is important for medical images, since it is unacceptable to introduce large artifacts.
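  • A tiny numerical illustration of that point (illustrative values only, not from the patent): because the network output is only the small residual, even a completely wrong residual prediction perturbs the result by no more than the residual's own magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
low_quality = rng.random((16, 16))
true_residual = 0.05 * rng.standard_normal((16, 16))  # small correction to learn
high_quality = low_quality + true_residual

bad_prediction = np.zeros_like(true_residual)         # model predicts nothing useful
restored = low_quality + bad_prediction
worst_case_error = np.abs(restored - high_quality).max()
print(worst_case_error <= np.abs(true_residual).max())  # True: error bounded by the residual
```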
  • Turning now to the patch-based aspect of the current invention, the patch-based approach to training and applying the deep network reduces the model complexity and addresses the scarcity of medical data, where the data augmentation includes taking patches from images to improve the final results. The invention also conducts "feature augmentation" by taking the outputs from the nonlinear filters as inputs for the deep network, which improves the performance. The model is trained on the multi-contrast image patches and outputs improved versions of the patches, which are later used to synthesize the entire image. Thousands of image patches can be derived by data augmentation, which includes cropping, rotations, and flips of a single image, as sketched below. This patch-based approach reduces the model complexity, accelerates training, resolves the lack of data, avoids overfitting, and adds more data randomization, which helps to achieve better performance. Residual training and the multi-contrast information also help to reduce artifacts and preserve pathology.
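  • A minimal sketch of the rotation-and-flip part of this augmentation (an assumed NumPy implementation, not the patent's code): each cropped patch yields eight dihedral variants, so dense overlapping crops of a single image produce thousands of training patches.

```python
import numpy as np

def dihedral_augment(patch):
    """Return the 8 rotated/flipped variants of a patch (last two axes are spatial)."""
    variants = []
    for k in range(4):                                  # 0, 90, 180, 270 degree rotations
        rot = np.rot90(patch, k, axes=(-2, -1))
        variants.append(rot)
        variants.append(np.flip(rot, axis=-1))          # plus a horizontal flip of each
    return variants

patch = np.arange(16 * 16, dtype=float).reshape(16, 16)
print(len(dihedral_augment(patch)))                     # 8 variants per cropped patch
```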
  • Further, the invention uses multi-contrast information in non-conventional ways and implements an SRCNN+ResNet structure to achieve better medical image quality.
  • There are many potential applications for this invention, where images from previous and future scans can be processed to generate improved versions of the images. This is valuable for all medical imaging devices and PACS systems. In one embodiment, the invention can be used as part of the on-scanner reconstruction to directly output improved versions of images, which can be integrated into a variety of medical imaging sequence applications. In a further embodiment, the invention provides a specific application for Arterial Spin Labeling (ASL) MRI, which is used in the diagnosis of many neurological diseases. Applications to other types of images are within the scope of the invention, and immediate improvements can be imagined for highly sampled multidirectional Diffusion Tensor Imaging (DTI). Additionally, the invention can enable improved reconstruction of under-sampled images, such as fast MRI.
  • Arterial Spin Labeling (ASL) MRI is a powerful neuroimaging tool that provides quantitative perfusion maps. However, ASL perfusion maps typically suffer from low SNR and resolution. Averaging multiple scans (a high Nex value) can improve the SNR, but at the cost of significantly increased acquisition time. In one embodiment, the current invention provides a technique for improved ASL image quality with boosted SNR and/or resolution by incorporating the information of multi-contrast images, using nonlinear, non-local, spatially variant multi-lateral filtering, and training a deep network model to adaptively tune the final denoising level and further boost the SNR to improve image quality. Various in-vivo experiments by the inventors demonstrate the superior performance of the invention, which significantly accelerates ASL acquisition and improves image quality. The current invention provides a solution to this urgent issue of SNR starvation in ASL and eASL, significantly improving image quality and accelerating ASL examinations.
  • To summarize, the three main innovations of this invention are: incorporating multi-contrast information in ASL denoising, using a nonlinear spatially variant filter to prevent over-smoothing of edges, and generating the final denoising/restoration result using a deep network.
  • FIGS. 2A-2B show flow diagrams of the training and application algorithms according to the current invention, which include generating denoised ASL images with different denoising levels using a multi-lateral guided filter based on the low-SNR ASL image and other-contrast MRI. The algorithm creates multi-contrast MRI patches from the original low-SNR ASL image, its multiple denoised versions, and co-registered anatomical MRI, where T2w and PDw are shown. Further, the algorithm trains a deep network to learn the final denoising from the multi-contrast MRI patches and the corresponding patch in the reference high-SNR ASL image. Finally, the algorithm synthesizes the final denoised ASL image from the output patches.
  • In an exemplary multi-lateral guided filtering using multi-contrast information, for an ASL exam there is always one proton density weighted (PDw) image taken without labeling. There are also highly likely to be additional anatomical scans such as T1w, T2w, FLAIR, etc., as these are often acquired as part of a routine MRI examination. These images share the basic structural information and have much higher SNR. Here the invention uses a multi-lateral guided filter to conduct a location-variant weighted average for each pixel. The weight is based on the differences of the ASL signal and the multi-contrast anatomical MR (T2w, PDw, etc.) signals between each pixel and its neighboring pixels. Unlike conventional Gaussian or wavelet based denoising, this step is a non-local, nonlinear, location-variant filter, which tends to better preserve structures and avoid over-smoothing. The weighting parameter here controls the smoothness; a simplified sketch of such a filter follows.
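  • The sketch below is a simplified, local-window reading of such a multi-lateral guided filter (function name, parameter names, and values are assumptions, not the patent's implementation): each neighbor's weight combines spatial distance, the ASL intensity difference, and the intensity differences in every guide contrast, and varying the range parameters produces the different smoothing levels used later.

```python
import numpy as np

def multilateral_filter(asl, guides, radius=3, sigma_s=2.0, sigma_asl=0.5, sigma_g=0.1):
    """Location-variant weighted average of the ASL image, guided by the ASL
    signal itself and by co-registered anatomical contrasts (T2w, PDw, ...)."""
    H, W = asl.shape
    pad = radius
    asl_p = np.pad(asl, pad, mode="reflect")
    guides_p = [np.pad(g, pad, mode="reflect") for g in guides]
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.zeros_like(asl, dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            win = asl_p[ci - radius:ci + radius + 1, cj - radius:cj + radius + 1]
            w = spatial * np.exp(-((win - asl_p[ci, cj]) ** 2) / (2 * sigma_asl**2))
            for gp in guides_p:                          # one range kernel per guide contrast
                gwin = gp[ci - radius:ci + radius + 1, cj - radius:cj + radius + 1]
                w = w * np.exp(-((gwin - gp[ci, cj]) ** 2) / (2 * sigma_g**2))
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

# Toy usage: shared anatomy guides the smoothing of a low-SNR ASL image.
rng = np.random.default_rng(2)
anatomy = np.kron(rng.random((6, 6)), np.ones((8, 8)))          # 48x48 "anatomy"
asl = 0.1 * anatomy + 0.2 * rng.standard_normal(anatomy.shape)  # low-SNR ASL
print(multilateral_filter(asl, guides=[anatomy], sigma_asl=1.0).shape)  # (48, 48)
```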
  • In forming the image with multi-contrast MRI patches, after the denoising, multiple denoised ASL images with different weighting parameters are obtained. A stack of multi-contrast images is formed, including the original low-SNR ASL image, multiple denoised ASL images with different smoothing levels, and co-registered T2w and PDw images. Small multi-contrast patches (16×16, etc.) are then cropped from these multi-contrast images. The final denoising works on these local stacks of patches, which accelerates computation, reduces model complexity, and increases the training sample size (~10000 from one slice) to prevent overfitting in the deep network training.
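  • A short sketch of how such a multi-contrast patch stack can be formed (shapes, channel ordering, and stride are illustrative assumptions): the contrasts are stacked channel-wise and overlapping 16×16 windows are cropped, so even one slice yields thousands of patch stacks (a stride of 1 on a 128×128 slice gives roughly 1.3e4, on the order of the ~10000 mentioned above).

```python
import numpy as np

H, W, P, STRIDE = 128, 128, 16, 2
rng = np.random.default_rng(0)
asl = rng.random((H, W))                             # original low-SNR ASL slice
denoised = [rng.random((H, W)) for _ in range(3)]    # e.g. three smoothing levels
t2w, pdw = rng.random((H, W)), rng.random((H, W))    # co-registered anatomical images
stack = np.stack([asl, *denoised, t2w, pdw])         # shape (6, H, W)

patches = [
    stack[:, i:i + P, j:j + P]
    for i in range(0, H - P + 1, STRIDE)
    for j in range(0, W - P + 1, STRIDE)
]
print(len(patches), patches[0].shape)                # 3249 patches of shape (6, 16, 16)
```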
  • Next, a process using the deep network for denoising reconstruction is implemented, which includes training a deep network to output the final denoising and restoration results. Here a convolutional-deconvolutional neural network is used, with the structure shown in FIG. 2A. The input of the deep network is the multi-contrast MRI patches, and the output is the final denoised version. The deep network is trained on one set of slices or image sets using high-SNR/high-resolution ASL as ground truth, and applied to different scans and slices. The final denoised ASL image is formed by synthesizing the output patches.
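  • A minimal, self-contained sketch of such a convolutional-deconvolutional patch network and one training step follows. The channel counts, kernel sizes, by-pass placement, and optimizer settings are assumptions for illustration, not the patent's actual configuration; in practice the targets would be the corresponding patches of the high-SNR reference ASL rather than random tensors.

```python
import torch
import torch.nn as nn

class ConvDeconv(nn.Module):
    """Convolution (encoder) + deconvolution (decoder) patch network with a
    by-pass connection so the network only learns the residual correction."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # 8 -> 4
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),             # 4 -> 8
            nn.ConvTranspose2d(32, 1, 2, stride=2),                         # 8 -> 16
        )

    def forward(self, x):
        # Add the predicted residual back onto the raw low-SNR ASL channel.
        return x[:, :1] + self.decode(self.encode(x))

model = ConvDeconv()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(32, 6, 16, 16)    # multi-contrast patch stacks
targets = torch.randn(32, 1, 16, 16)   # stand-in for high-SNR reference ASL patches
for step in range(5):                  # a real run would loop over many batches
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
print(loss.item())
```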
  • Turning now to FIG. 2B, for applying the algorithm, the nonlinear ASL signal denoising is provided using a non-local mean (NLM) and a multi-contrast guided filter. Patches are then generated from high-SNR ASL reference images, low-SNR raw ASL images, multi-level denoised ASL images, and the anatomical MR images. The trained deep network is applied to generate the nonlinear image restoration from the multi-contrast patches. Finally, the restored image is synthesized from the output patches for output and review by a physician.
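  • The final synthesis step can be sketched as follows (an assumed implementation, not the patent's code): restored patches are accumulated back at the positions they were cropped from, and overlapping contributions are averaged.

```python
import numpy as np

def synthesize_from_patches(patches, positions, shape, size=16):
    """Reassemble a full image from (possibly overlapping) restored patches."""
    image = np.zeros(shape, dtype=float)
    counts = np.zeros(shape, dtype=float)
    for patch, (i, j) in zip(patches, positions):
        image[i:i + size, j:j + size] += patch
        counts[i:i + size, j:j + size] += 1.0
    return image / np.maximum(counts, 1.0)             # average the overlaps

# Hypothetical usage with dummy restored patches on a regular crop grid.
H, W, P, S = 64, 64, 16, 8
positions = [(i, j) for i in range(0, H - P + 1, S) for j in range(0, W - P + 1, S)]
patches = [np.ones((P, P)) for _ in positions]
print(synthesize_from_patches(patches, positions, (H, W)).shape)  # (64, 64)
```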
  • Multiple in-vivo experiments were conducted to validate the performance of the algorithm for improving SNR. Using 6 repetitions (Nex=6) as a reference high-SNR ASL, the results show that the algorithm according to the current invention reduces the error and noise for low-SNR ASL acquired with Nex=1, which is more than a four-fold reduction of absolute acquisition time compared with the high-SNR reference scan. The performance of the algorithm was then validated on improving both SNR and resolution in multidelay ASL. The results demonstrate better image quality for each delay time, as well as improved transit time maps computed from them.
  • The in-vivo experiments demonstrate that the invention provides superior performance for restoring ASL images with higher SNR and/or resolution (effectively ~6× Nex, or more than a 4× time reduction). Compared with conventional reconstruction and denoising results, the current invention can better reduce noise, preserve structures, and provide more detailed functional metrics such as CBF and transit-time maps. The invention can also be applied to complement parallel imaging and compressed sensing for further accelerated acquisition of ASL scans.
  • The present invention has now been described in accordance with several exemplary embodiments, which are intended to be illustrative in all aspects, rather than restrictive. Thus, the present invention is capable of many variations in detailed implementation, which may be derived from the description contained herein by a person of ordinary skill in the art. For example, higher resolution and SNR conventional MR images, such as T2w or FLAIR images, could also be effectively denoised using this approach. Or, the model could be trained to take a set of MRI images as input and a CT scan as the reference image, to create an estimate of the patient's CT scan in a situation in which one was not acquired.
  • All such variations are considered to be within the scope and spirit of the present invention as defined by the following claims and their legal equivalents.

Claims (12)

What is claimed:
1) A method of shortening imaging time for diagnostic and functional imaging, comprising:
a) obtaining at least two input images of a subject, using a medical imager, wherein each said input image comprises a different contrast;
b) generating a plurality of copies of said input images of said subject using non-local mean (NLM) filtering, using an appropriately programmed computer, wherein each said input image copy of said subject comprises different spatial characteristics;
c) obtaining at least one reference image of said subject, using said medical imager, wherein said at least one reference image of said subject comprises imaging characteristics that are different from said input images of said subject;
d) training a deep network model, using data augmentation on said appropriately programmed computer, to adaptively tune model parameters to approximate said reference image from an initial set of said input images; and
e) outputting an improved quality image of said subject, for analysis by a physician.
2) The method according to claim 1, wherein said medical imager comprises a magnetic resonance imager (MRI), or a computed tomography (CT) scanner.
3) The method according to claim 1, wherein said data augmentation is selected from the group consisting of cropping, rotating, and flipping said input images of said subject.
4) The method according to claim 1, wherein said imaging characteristics of said at least one reference image of said subject that are different from said input images of said subject are selected from the group consisting of a higher SNR, a higher resolution, less artifacts, a different image contrast, an image obtained using a CT imager, and an image obtained using an MRI imager.
5) The method according to claim 1, wherein said data set is selected from the group consisting of an arterial spin labeling (ASL) dataset, an MRI dataset, and a CT dataset.
6) The method according to claim 1, wherein said data augmentation comprises training a deep network model on a plurality of patches of the said input images of said subject for said data augmentation, wherein output images of said subject are reassembled from individual said input image patches.
7) The method according to claim 1, wherein multi-contrast information is used as a regularization for said NLM for improved regularized denoising and over-smoothing avoidance.
8) The method according to claim 1, wherein said data augmentation further comprises using said input images of said subject and outputs from nonlinear filters as inputs for said deep network model, wherein said input images of said subject are acquired from arterial spin labeling (ASL) and other contrast images of a brain, wherein said NLM filtering is used on said ASL image and regularized using said other contrast images, wherein said data augmentation is used on all said input images of said subject and on images created from said NLM filters, wherein all said augmented data is fit into said deep network model.
9) The method according to claim 8, wherein said deep network model comprises using multi-contrast patches of said other images for convolution and de-convolution with said input images of said subject that by-pass use of a whole data set to enable residual learning.
10) The method according to claim 1, wherein said input images and said reference image are from different said medical imagers, wherein said input image of said subject comprises an MRI image of said subject that is used to predict a CT image of said subject.
11) The method according to claim 1, wherein said input images of said subject, said plurality of copies of said input images of said subject and said at least one reference image of said subject comprises a data set.
12) The method according to claim 1, wherein said trained deep network is applied for improving and denoising any relatively low quality medical images.
US15/475,760 2017-03-31 2017-03-31 Quality of medical images using multi-contrast and deep learning Active 2037-05-30 US10096109B1 (en)

Priority Applications (13)

Application Number Priority Date Filing Date Title
US15/475,760 US10096109B1 (en) 2017-03-31 2017-03-31 Quality of medical images using multi-contrast and deep learning
EP22204287.1A EP4148660B1 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
PL21207739.0T PL3971824T3 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
PL18774771T PL3600026T3 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
DK21207739.0T DK3971824T3 (en) 2017-03-31 2018-03-20 IMPROVING THE QUALITY OF MEDICAL IMAGES USING MULTICONTRAST AND DEEP LEARNING
DK18774771.2T DK3600026T3 (en) 2017-03-31 2018-03-20 IMPROVING THE QUALITY OF MEDICAL IMAGES USING MULTI CONTRAST AND DEPTH LEARNING
EP21207739.0A EP3971824B1 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
PCT/US2018/023383 WO2018183044A1 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
EP18774771.2A EP3600026B1 (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
ES21207739T ES2937424T3 (en) 2017-03-31 2018-03-20 Improving the quality of medical images using multi-contrast and deep learning
CN201880022949.8A CN110461228B (en) 2017-03-31 2018-03-20 Improving quality of medical images using multi-contrast and deep learning
ES18774771T ES2906349T3 (en) 2017-03-31 2018-03-20 Improving the quality of medical images using multicontrast and deep learning
US16/152,185 US10467751B2 (en) 2017-03-31 2018-10-04 Quality of medical images using multiple-contrast and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/475,760 US10096109B1 (en) 2017-03-31 2017-03-31 Quality of medical images using multi-contrast and deep learning

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/152,185 Continuation US10467751B2 (en) 2017-03-31 2018-10-04 Quality of medical images using multiple-contrast and deep learning

Publications (2)

Publication Number Publication Date
US20180286037A1 true US20180286037A1 (en) 2018-10-04
US10096109B1 US10096109B1 (en) 2018-10-09

Family

ID=63670653

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/475,760 Active 2037-05-30 US10096109B1 (en) 2017-03-31 2017-03-31 Quality of medical images using multi-contrast and deep learning
US16/152,185 Active US10467751B2 (en) 2017-03-31 2018-10-04 Quality of medical images using multiple-contrast and deep learning

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/152,185 Active US10467751B2 (en) 2017-03-31 2018-10-04 Quality of medical images using multiple-contrast and deep learning

Country Status (7)

Country Link
US (2) US10096109B1 (en)
EP (3) EP4148660B1 (en)
CN (1) CN110461228B (en)
DK (2) DK3971824T3 (en)
ES (2) ES2937424T3 (en)
PL (2) PL3600026T3 (en)
WO (1) WO2018183044A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065884A1 (en) * 2017-08-22 2019-02-28 Boe Technology Group Co., Ltd. Training method and device of neural network for medical image processing, and medical image processing method and device
CN109472351A (en) * 2018-10-25 2019-03-15 深圳市康拓普信息技术有限公司 A kind of method and system of quick trained deep learning model
US10393842B1 (en) * 2018-02-20 2019-08-27 The Board Of Trustees Of The Leland Stanford Junior University Highly-scalable image reconstruction using deep convolutional neural networks with bandpass filtering
CN110309713A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Expression Recognition model training method, device, equipment and storage medium
CN111242846A (en) * 2020-01-07 2020-06-05 福州大学 Fine-grained scale image super-resolution method based on non-local enhancement network
US10726573B2 (en) * 2016-08-26 2020-07-28 Pixart Imaging Inc. Object detection method and system based on machine learning
US10726291B2 (en) 2016-08-26 2020-07-28 Pixart Imaging Inc. Image recognition method and system based on deep learning
CN111553860A (en) * 2020-04-29 2020-08-18 北京理工大学 Deep learning non-neighborhood averaging processing method and system for water color remote sensing image
CN111870245A (en) * 2020-07-02 2020-11-03 西安交通大学 Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
WO2020231016A1 (en) * 2019-05-16 2020-11-19 Samsung Electronics Co., Ltd. Image optimization method, apparatus, device and storage medium
CN112470190A (en) * 2019-09-25 2021-03-09 深透医疗公司 System and method for improving low dose volume contrast enhanced MRI
CN112801128A (en) * 2020-12-14 2021-05-14 深圳云天励飞技术股份有限公司 Non-motor vehicle identification method, device, electronic equipment and storage medium
CN113012077A (en) * 2020-10-20 2021-06-22 杭州微帧信息科技有限公司 Denoising method based on convolution guide graph filtering
US11056220B2 (en) * 2018-11-21 2021-07-06 Enlitic, Inc. Utilizing density properties of anatomical features in an intensity transform augmentation system
US20210264192A1 (en) * 2018-07-31 2021-08-26 Sony Semiconductor Solutions Corporation Solid-state imaging device and electronic device
US11166022B2 (en) * 2019-06-04 2021-11-02 Google Llc Quantization constrained neural image coding
US11182877B2 (en) 2018-08-07 2021-11-23 BlinkAI Technologies, Inc. Techniques for controlled generation of training data for machine learning enabled image enhancement
CN114129171A (en) * 2021-12-01 2022-03-04 山东省人工智能研究院 Electrocardiosignal noise reduction method based on improved residual error dense network
US11354782B2 (en) 2017-08-04 2022-06-07 Outward, Inc. Machine learning based image processing techniques

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200063222A (en) * 2017-10-09 2020-06-04 더 보드 어브 트러스티스 어브 더 리랜드 스탠포드 주니어 유니버시티 Contrast dose reduction in medical imaging with deep learning
JP7324195B2 (en) 2017-10-23 2023-08-09 コーニンクレッカ フィリップス エヌ ヴェ Optimizing Positron Emission Tomography System Design Using Deep Imaging
WO2019161043A1 (en) * 2018-02-15 2019-08-22 GE Precision Healthcare LLC System and method for synthesizing magnetic resonance images
US10915990B2 (en) 2018-10-18 2021-02-09 General Electric Company Systems and methods for denoising medical images with deep learning network
CN109658357A (en) * 2018-12-24 2019-04-19 北京理工大学 A kind of denoising method towards remote sensing satellite image
CN109978764B (en) * 2019-03-11 2021-03-02 厦门美图之家科技有限公司 Image processing method and computing device
CN109949240B (en) * 2019-03-11 2021-05-04 厦门美图之家科技有限公司 Image processing method and computing device
CN112133410A (en) * 2019-06-25 2020-12-25 西门子医疗有限公司 MRI image reconstruction using machine learning
CN110335217A (en) * 2019-07-10 2019-10-15 东北大学 Medical image denoising method based on 3D residual encoding and decoding
CN110428415B (en) * 2019-08-05 2022-05-13 上海联影医疗科技股份有限公司 Medical image quality evaluation method, device, equipment and storage medium
EP4031893A1 (en) 2019-09-18 2022-07-27 Bayer Aktiengesellschaft Generation of mri images of the liver
US11915361B2 (en) 2019-09-18 2024-02-27 Bayer Aktiengesellschaft System, method, and computer program product for predicting, anticipating, and/or assessing tissue characteristics
AU2020347797A1 (en) 2019-09-18 2022-03-31 Bayer Aktiengesellschaft Forecast of MRI images by means of a forecast model trained by supervised learning
EP4041057A1 (en) 2019-10-11 2022-08-17 Bayer Aktiengesellschaft Acceleration of mri examinations
US11320357B2 (en) 2019-12-23 2022-05-03 Chevron U.S.A. Inc. System and method for estimation of rock properties from core images
US11348243B2 (en) 2020-01-24 2022-05-31 GE Precision Healthcare LLC Systems and methods for medical image style transfer using deep neural networks
US11346912B2 (en) 2020-07-23 2022-05-31 GE Precision Healthcare LLC Systems and methods of generating robust phase images in magnetic resonance images
CN116057937A (en) * 2020-09-15 2023-05-02 三星电子株式会社 Method and electronic device for detecting and removing artifacts/degradation in media
CN112767260A (en) * 2020-12-30 2021-05-07 上海联影智能医疗科技有限公司 Image quality improving method and device, computer equipment and storage medium
AU2021430556A1 (en) 2021-03-02 2023-08-31 Bayer Aktiengesellschaft Machine learning in the field of contrast-enhanced radiology
CA3215244A1 (en) 2021-03-09 2022-09-15 Bayer Aktiengesellschaft Machine learning in the field of contrast-enhanced radiology
JP2022178846A (en) * 2021-05-21 2022-12-02 学校法人藤田学園 Medical information processing method, medical information processing device, and medical image processing device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010041107A1 (en) * 2010-09-21 2012-01-19 Siemens Aktiengesellschaft Image processing method for magnetic resonance image data of a patient, in which an edge image and smoothed raw image data are superimposed to create an edge-enhanced image that is then displayed and further processed
NL2008702A (en) * 2011-05-25 2012-11-27 Asml Netherlands Bv Computational process control.
WO2013082207A1 (en) * 2011-12-01 2013-06-06 St. Jude Children's Research Hospital T2 spectral analysis for myelin water imaging
US20130202079A1 (en) * 2012-02-07 2013-08-08 Lifeng Yu System and Method for Controlling Radiation Dose for Radiological Applications
US9874623B2 (en) * 2012-04-20 2018-01-23 University Of Virginia Patent Foundation Systems and methods for regularized reconstructions in MRI using side information
WO2015009830A1 (en) * 2013-07-16 2015-01-22 Children's National Medical Center Three dimensional printed replicas of patient's anatomy for medical applications
US11647915B2 (en) * 2014-04-02 2023-05-16 University Of Virginia Patent Foundation Systems and methods for medical imaging incorporating prior knowledge
US9953246B2 (en) * 2014-12-16 2018-04-24 The Regents Of The University Of California Feature-preserving noise removal
EP3295202B1 (en) * 2015-05-08 2018-07-25 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Method and device for magnetic resonance imaging with improved sensitivity by noise reduction
US10561337B2 (en) * 2015-08-04 2020-02-18 University Of Virginia Patent Foundation Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction
US10049446B2 (en) * 2015-12-18 2018-08-14 Carestream Health, Inc. Accelerated statistical iterative reconstruction
US9875527B2 (en) * 2016-01-15 2018-01-23 Toshiba Medical Systems Corporation Apparatus and method for noise reduction of spectral computed tomography images and sinograms using a whitening transform
US10547873B2 (en) * 2016-05-23 2020-01-28 Massachusetts Institute Of Technology System and method for providing real-time super-resolution for compressed videos
US10311552B2 (en) * 2017-04-06 2019-06-04 Pixar De-noising images using machine learning

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726573B2 (en) * 2016-08-26 2020-07-28 Pixart Imaging Inc. Object detection method and system based on machine learning
US10726291B2 (en) 2016-08-26 2020-07-28 Pixart Imaging Inc. Image recognition method and system based on deep learning
US11790491B2 (en) * 2017-08-04 2023-10-17 Outward, Inc. Machine learning based image processing techniques
US11354782B2 (en) 2017-08-04 2022-06-07 Outward, Inc. Machine learning based image processing techniques
US11449967B2 (en) 2017-08-04 2022-09-20 Outward, Inc. Machine learning based image processing techniques
US11810270B2 (en) 2017-08-04 2023-11-07 Outward, Inc. Machine learning training images from a constrained set of three-dimensional object models associated with prescribed scene types
US11636664B2 (en) * 2017-08-22 2023-04-25 Boe Technology Group Co., Ltd. Training method and device of neural network for medical image processing, and medical image processing method and device
US20190065884A1 (en) * 2017-08-22 2019-02-28 Boe Technology Group Co., Ltd. Training method and device of neural network for medical image processing, and medical image processing method and device
US10393842B1 (en) * 2018-02-20 2019-08-27 The Board Of Trustees Of The Leland Stanford Junior University Highly-scalable image reconstruction using deep convolutional neural networks with bandpass filtering
US20210264192A1 (en) * 2018-07-31 2021-08-26 Sony Semiconductor Solutions Corporation Solid-state imaging device and electronic device
US11643014B2 (en) 2018-07-31 2023-05-09 Sony Semiconductor Solutions Corporation Image capturing device and vehicle control system
US11820289B2 (en) * 2018-07-31 2023-11-21 Sony Semiconductor Solutions Corporation Solid-state imaging device and electronic device
US11182877B2 (en) 2018-08-07 2021-11-23 BlinkAI Technologies, Inc. Techniques for controlled generation of training data for machine learning enabled image enhancement
CN109472351A (en) * 2018-10-25 2019-03-15 深圳市康拓普信息技术有限公司 Method and system for rapid training of a deep learning model
US11669790B2 (en) 2018-11-21 2023-06-06 Enlitic, Inc. Intensity transform augmentation system and methods for use therewith
US11056220B2 (en) * 2018-11-21 2021-07-06 Enlitic, Inc. Utilizing density properties of anatomical features in an intensity transform augmentation system
WO2020231016A1 (en) * 2019-05-16 2020-11-19 Samsung Electronics Co., Ltd. Image optimization method, apparatus, device and storage medium
US11887218B2 (en) 2019-05-16 2024-01-30 Samsung Electronics Co., Ltd. Image optimization method, apparatus, device and storage medium
WO2020233368A1 (en) * 2019-05-22 2020-11-26 深圳壹账通智能科技有限公司 Expression recognition model training method and apparatus, and device and storage medium
CN110309713A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Expression Recognition model training method, device, equipment and storage medium
US11166022B2 (en) * 2019-06-04 2021-11-02 Google Llc Quantization constrained neural image coding
US11849113B2 (en) 2019-06-04 2023-12-19 Google Llc Quantization constrained neural image coding
US11624795B2 (en) 2019-09-25 2023-04-11 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced MRI
US20230296709A1 (en) * 2019-09-25 2023-09-21 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
EP4035124A4 (en) * 2019-09-25 2023-10-11 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
CN112470190A (en) * 2019-09-25 2021-03-09 深透医疗公司 System and method for improving low-dose volumetric contrast-enhanced MRI
CN111242846A (en) * 2020-01-07 2020-06-05 福州大学 Fine-grained scale image super-resolution method based on non-local enhancement network
CN111553860A (en) * 2020-04-29 2020-08-18 北京理工大学 Deep learning non-neighborhood averaging processing method and system for ocean color remote sensing images
CN111870245A (en) * 2020-07-02 2020-11-03 西安交通大学 Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
CN113012077A (en) * 2020-10-20 2021-06-22 杭州微帧信息科技有限公司 Denoising method based on convolutional guided-image filtering
CN112801128A (en) * 2020-12-14 2021-05-14 深圳云天励飞技术股份有限公司 Non-motor vehicle identification method, device, electronic equipment and storage medium
CN114129171A (en) * 2021-12-01 2022-03-04 山东省人工智能研究院 ECG signal denoising method based on an improved residual dense network

Also Published As

Publication number Publication date
WO2018183044A1 (en) 2018-10-04
CN110461228A (en) 2019-11-15
EP3600026B1 (en) 2021-11-17
EP4148660A1 (en) 2023-03-15
US20190035078A1 (en) 2019-01-31
EP4148660B1 (en) 2024-03-06
PL3971824T3 (en) 2023-03-06
US10467751B2 (en) 2019-11-05
CN110461228B (en) 2024-01-30
EP3971824B1 (en) 2022-11-02
DK3600026T3 (en) 2022-02-07
EP3600026A4 (en) 2020-12-30
US10096109B1 (en) 2018-10-09
EP3600026A1 (en) 2020-02-05
ES2937424T3 (en) 2023-03-28
DK3971824T3 (en) 2023-01-30
ES2906349T3 (en) 2022-04-18
PL3600026T3 (en) 2022-03-07
EP3971824A1 (en) 2022-03-23

Similar Documents

Publication Publication Date Title
US10467751B2 (en) Quality of medical images using multiple-contrast and deep learning
Armanious et al. Unsupervised medical image translation using cycle-MedGAN
Faragallah et al. A comprehensive survey analysis for present solutions of medical image fusion and future directions
US10346974B2 (en) Apparatus and method for medical image processing
CN110809782B (en) Attenuation correction system and method
JP6855223B2 (en) Medical image processing device, X-ray computer tomographic imaging device and medical image processing method
Campello et al. Combining multi-sequence and synthetic images for improved segmentation of late gadolinium enhancement cardiac MRI
WO2021102644A1 (en) Image enhancement method and apparatus, and terminal device
Zhou et al. Deep learning methods for medical image fusion: A review
Rajalingam et al. Combining multi-modality medical image fusion based on hybrid intelligence for disease identification
Izadi et al. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks
Wang et al. MSE-Fusion: Weakly supervised medical image fusion with modal synthesis and enhancement
CN113762522A (en) Training method and device of machine learning model and reconstruction method and device of image
US11481934B2 (en) System, method, and computer-accessible medium for generating magnetic resonance imaging-based anatomically guided positron emission tomography reconstruction images with a convolutional neural network
Malczewski PET image reconstruction using compressed sensing
Sander et al. Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI
CN111340903A (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111047512A (en) Image enhancement method and device and terminal equipment
Joshi et al. Multi-Modality Medical Image Fusion Using SWT & Speckle Noise Reduction with Bidirectional Exact Pattern Matching Algorithm
Iddrisu et al. 3D reconstructions of brain from MRI scans using neural radiance fields
Zhao et al. Medical image super-resolution with deep networks
Lu et al. Unified dual-modality image reconstruction with dual dictionaries
CN111311531A (en) Image enhancement method and device, console equipment and medical imaging system
Chandra et al. Local contrast‐enhanced MR images via high dynamic range processing
EP4152037A1 (en) Apparatus and method for generating a perfusion image, and method for training an artificial neural network therefor

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZAHARCHUK, GREG;GONG, ENHAO;PAULY, JOHN M.;REEL/FRAME:043347/0570

Effective date: 20170331

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4