CN107545309B - Image quality scoring using deep generative machine learning models - Google Patents

Image quality scoring using deep generative machine learning models

Info

Publication number
CN107545309B
CN107545309B (granted from application CN201710487019.7A)
Authority
CN
China
Prior art keywords
image
machine learning
training
generating
discriminative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710487019.7A
Other languages
Chinese (zh)
Other versions
CN107545309A (en)
Inventor
B.L. Odry
B. Mailhe
H.E. Cetingul
Xiao Chen
M.S. Nadar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthcare GmbH
Original Assignee
Siemens Healthcare GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare GmbH
Publication of CN107545309A
Application granted
Publication of CN107545309B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24143Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/191Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/1914Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries, e.g. user dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/191Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19173Classification techniques
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images


Abstract

The invention discloses image quality scoring using a deep generative machine learning model. For image quality scoring of images from medical scanners, deep machine learning may be used to create a generative model of the expected good-quality image. The deviation of the input image from the generative model is used as an input feature vector to a discriminative model. The discriminative model may also operate on another input feature vector derived from the input image. Based on these input feature vectors, the discriminative model outputs an image quality score.

Description

Image quality scoring using deep generative machine learning models
RELATED APPLICATIONS
This patent document claims the benefit of the filing date under 35 U.S.C. § 119(e) of provisional U.S. Patent Application Serial No. 62/353,737, filed June 23, 2016, which is hereby incorporated by reference.
Technical Field
The present embodiment relates to scoring image quality.
Background
In medical imaging, the process of image acquisition and reconstruction inevitably introduces artifacts. One or more different types of artifacts (such as motion blur, noise, line artifacts, or grayscale non-uniformity) may appear in the resulting image.
A scoring system assesses image quality after acquisition and helps determine whether sufficient clinical value can be extracted, and thus whether a correct diagnosis can be reached. The scoring system assesses the extent and severity of the artifacts by assigning each type of artifact an integer between 1 and 5. A global quality score is derived from those artifact-specific scores. This process may be manual and therefore may not be consistent. Computerized scoring schemes for photographs may not be suitable for medical images.
Disclosure of Invention
By way of introduction, the preferred embodiments described below include methods, systems, instructions, and non-transitory computer-readable media for image quality scoring of images from medical scanners. Using deep machine learning, a generative model of the expected good-quality image may be created. The deviation of the input image from the generative model is used as an input feature vector for a discriminative model. The discriminative model may also operate on another input feature vector derived from the input image. Based on the input feature vectors, the discriminative model outputs an image quality score.
In a first aspect, a method for image quality scoring of images from a medical scanner is provided. The medical scanner generates an image representative of a patient. The image has a level of artifact due to the generation by the medical scanner. A machine determines, with a deep generative machine-learned model, a probability map of artifacts as a function of location for the image, and assigns a quality score for the image with the probability map applied to a discriminative machine-learned classifier. The quality score for the image of the patient is transmitted.
In a second aspect, a method for training a machine to determine an image quality score is provided. The machine trains a deep generative model using a piecewise differentiable function. The deep generative model is trained to output a spatial distribution of probabilities in response to an input image. The machine trains a discriminative classifier to output a score for image quality based on an input of the spatial distribution of probabilities.
In a third aspect, a method for image quality scoring of images from a medical scanner is provided. The medical scanner generates an image representative of a patient. The image has a level of artifact due to the generation by the medical scanner. A machine determines, with a deep generative machine-learned model, a probability map of artifacts as a function of location for the image, and assigns a quality score for the image with the probability map applied to a discriminative machine-learned classifier. The probability map is a first input vector, and features of the image are a second input vector. The quality score for the image of the patient is transmitted.
The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Additional aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments and may be claimed later, either individually or in combination.
Drawings
The components and figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
FIG. 1 illustrates an example chain of processes for scoring images for quality using generative models;
FIG. 2 illustrates one embodiment of a method for image quality scoring of images from a medical scanner;
FIG. 3 is a flow diagram of one embodiment of a method for training a machine to determine an image quality score;
FIG. 4 illustrates another example process chain for scoring an image for quality using a generative model; and
fig. 5 is a block diagram of one embodiment of a system for machine learning and/or using machine learning models for image quality scoring.
Detailed Description
Deep generative models are used to score image quality for medical or other images. A deep generative model directly evaluates the probability that a new image belongs to the same category as the training data. For example, a deep generative model may learn how to generate images of birds from many images of birds. While those models have shown the ability to synthesize natural-looking images (such as birds), generative models are rarely used for other tasks due to their complexity, which does not allow easy manipulation by inference or optimization algorithms. Deep generative models are rarely used in medical imaging.
Scoring the image quality is based on the presence of any artifact in the image and the extent and/or severity of the artifact. Typically, the two criteria are scored independently and manually, and a global rating score is derived accordingly. Rather than manual scoring, discriminative and generative features are combined to produce a score. The image quality score is based on learned deep image features, including features output by the generative model.
FIG. 1 illustrates one embodiment of a process flow or chain. The process represents machine training or application of a machine learning model. Two or more instances of machine training or applications are performed. The generative model 22 is learned. The generative model 22 learns to output a probability map 24 based on the input of the image 20 to be scored. The discriminative classifier 26 learns to score using the output of the generative model 22 (i.e., the probability map 24). The generative model 22 may be learned as part of learning the discriminative classifier 26, or both may be machine learned separately.
Fig. 2 illustrates one embodiment of a method for image quality scoring of images from a medical scanner. FIG. 2 relates to the application of the generative model 22 and the discriminative classifier 26. The method is described in the context of medical imaging, but may be applied in other contexts (e.g., photographs, material testing, astronomy, or seismic sensing). The generative model 22 determines a probability map 24 of the spatial distribution of the likelihood that the input image 20 is normal. The discriminative classifier 26 uses the probability map 24 to assign a score for the quality of the input image 20.
Additional, different, or fewer acts may be provided. For example, act 30 is replaced with loading a medical image or other type of image. As another example, acts 36 and/or 38 are not performed.
The acts are performed by the system of FIG. 5, other systems, a medical scanner, a workstation, a computer, and/or a server. For example, act 30 is performed by a medical scanner. Acts 32-38 are performed by a processing component, such as a medical scanner, workstation, or computer. The acts are performed in the order shown (e.g., top to bottom) or in another order.
In act 30, the medical scanner generates an image representative of the patient. The image is made available by or within the medical scanner. Alternatively, a processor may extract the data from a picture archiving and communication system or a medical records database. Data not in a medical environment may instead be acquired, such as by capturing or loading a photograph or video. In alternative embodiments, other sensors (such as a camera) may generate the image.
A medical image or data set is acquired by a medical scanner. Alternatively, the acquisition is from a storage device or memory, such as a previously created data set acquired from a PACS. The acquisition may be via transmission over a network.
The image is medical imaging data. The medical image is a frame of data representing a patient. The data may be in any format. Although the terms image and imaging are used, the image or imaging data may be in a format prior to the actual display of the image. For example, the medical image may be a plurality of scalar values representing different locations in a cartesian or polar format different from the display format. As another example, the medical image may be a plurality of red, green, blue (e.g., RGB) values that are output to a display for generating an image in a display format. The medical image may be a currently or previously displayed image in a display or another format. An image or imaging is a data set that may be used for imaging, such as scan data representing a patient.
Any type of medical image and corresponding medical scanner may be used. In one embodiment, the medical image is a Computed Tomography (CT) image acquired with a CT system. For example, a chest CT dataset may be used to detect the bronchial tree, fissures, and/or blood vessels in the lungs. For CT, the raw data from the detector is reconstructed into a three-dimensional representation. As another example, Magnetic Resonance (MR) data representing a patient is acquired. MR data is acquired with an MR system. The data is acquired using a pulse sequence for scanning a patient. Data representing an internal portion of a patient is acquired. For MR, the magnetic resonance data is k-space data. Fourier analysis is performed to reconstruct the data from k-space into a three-dimensional object or image space. The data may be ultrasound data. The beamformer and transducer array acoustically scan a patient. Polar coordinate data is detected and processed to ultrasound data representing the patient. The data may be Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), or other nuclear imaging data. Radioactive emissions from within the patient are detected and reconstructed into imaging data.
The medical image represents tissue and/or bone structure of the patient. Alternatively, the medical image represents a flow, velocity or fluid within the patient. In other embodiments, the medical image represents both flow and structure. For PET and SPECT, the scan data represents functions of the tissue, such as uptake.
The medical image represents a one-, two-, or three-dimensional portion of a patient. For example, the medical image represents a region or slice of a patient as pixel values. A three-dimensional volume may be represented as pixel values by rendering into a two-dimensional format. As another example, the medical image represents a volume or three-dimensional distribution of voxels. A value is provided for each of a plurality of locations distributed in two or three dimensions. A medical image is acquired as a data frame. The data frame represents the scanned site at a given time or period. The data set may represent a region or volume over time, such as providing a 4D representation of the patient.
The image may include one or more artifacts. Different imaging modalities are sensitive to different types of artifacts. Physical processes (physics) used for scanning and/or processing to create images from the scan can generate artifacts. Motion of the patient or the sensor performing the scan may generate artifacts. Example artifacts in medical imaging include noise, blurring (e.g., motion artifacts), shadowing (e.g., obstructing or interfering with sensing), and/or undersampling artifacts.
There may be any level of artifacts. Scan settings for a medical scanner, patient condition, motion amount, filtering, reconstruction, other image processing, and/or other factors may contribute to different levels of artifacts in the image. An image may include one or more types of artifacts. The level may be a function of severity (e.g., intensity or contrast) and/or extent (e.g., distribution or number of instances).
The scoring is based on specific artifacts and/or general artifacts. The level of artifact in any given image is detected.
In act 32, the machine determines, with a deep generative machine-learned model, a probability map of artifacts as a function of location in the image. Any machine capable of applying the deep generative machine-learned model may be used. For example, a computer inputs the image to one or more matrices learned as the deep generative machine-learned model.
Any machine learning may be used to create the generative model. The generative model encodes the data into several independent latent variables and generates synthetic data by sampling the latent variables. In deep learning, the latent variables are learned by machine training. For a generative model for image quality scoring, the model takes only the image as input, but other inputs may be provided, such as clinical data for the patient. The generative model returns a log-likelihood and is implemented as a piecewise differentiable function, as used in deep learning. For example, the generative model is a deep learning model using a restricted Boltzmann machine, a deep belief network, neural autoregressive density estimation, a variational autoencoder, extensions thereof, or other deep learning approaches to generative modeling. In one embodiment, the trained deep generative model is a deep neural network with a set of j convolutional layers and k fully-connected layers, each followed by a nonlinear activation function, and a set of pooling layers for feature reduction. Other layer arrangements may be used.
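A minimal sketch of such a network is given below, assuming a PyTorch implementation of a variational-autoencoder-style encoder; the layer counts, layer sizes, and the 256x256 single-channel input are illustrative assumptions rather than details from the patent.

```python
import torch.nn as nn

class GenerativeEncoder(nn.Module):
    """Encoder of a variational-autoencoder-style generative model:
    j=2 convolutional layers and k=2 fully-connected layers, each followed
    by a nonlinear activation, with pooling layers for feature reduction."""

    def __init__(self, in_channels=1, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # pooling layer for feature reduction
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 256), nn.ReLU(),  # assumes 256x256 input
            nn.Linear(256, 2 * latent_dim),           # mean and log-variance
        )

    def forward(self, x):
        mu, logvar = self.fc(self.features(x)).chunk(2, dim=1)
        return mu, logvar  # independent latent variables to sample from
```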
FIG. 3 illustrates one embodiment of a flow diagram for a method of training a machine to determine an image quality score. The method is implemented by a computer, workstation, server, or other processing component having access to a database of hundreds or thousands of example images with known quality scores and/or artifact types. The acts are performed in the order shown with any amount of time interval between acts 40 and 42. Additional or different actions may be provided.
In act 40, the machine learns a generative model from the images of the database. Using a piecewise differentiable function or other deep learning function, the machine trains the deep generative model to output a spatial distribution of probabilities in response to an input image.
The images from the database used to train the deep generative model are of similar quality, such as a desired good quality. The level of artifact or quality score is above or below a threshold level, depending on whether a higher or lower score indicates better quality. All images used to train the generative model have good or top-level image quality. Any quality threshold may be used to select the training images, such as only images with a score of 5 in a score range of 1-5, where 5 is the best quality. In alternative embodiments, a wider range is used (e.g., medium level, low level, and/or no artifacts in the image). In yet further embodiments, images of any level of quality are used.
To train the generative model, the model is fed a set of good quality or similar quality images (e.g., as determined by their scores). The log-likelihood of the output is maximized. The generative model encodes the features that represent good quality in the images. Because the generative training is unsupervised, the training does not require matched pairs of good and poor images, which are difficult to acquire at scale in a medical setting. To obtain both good and poor images, the patient would need to be scanned twice, incurring additional dose and/or scan time with no direct benefit to the patient.
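The following sketch illustrates this unsupervised training on good-quality images only. The `log_likelihood` method is a hypothetical interface standing in for whatever bound the chosen generative model exposes (e.g., a variational autoencoder's evidence lower bound); the batch size, learning rate, and 1-5 score convention are also assumptions.

```python
import torch

def train_generative(model, images, scores, epochs=10, threshold=5):
    """Unsupervised training on images whose quality score meets the
    threshold (e.g., only 5s on a 1-5 scale, 5 being best quality)."""
    good = [img for img, s in zip(images, scores) if s >= threshold]
    loader = torch.utils.data.DataLoader(good, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x in loader:
            optimizer.zero_grad()
            loss = -model.log_likelihood(x).mean()  # maximize log p(X)
            loss.backward()  # piecewise differentiable end to end
            optimizer.step()
```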
The generative nature is used to determine a model of a good-quality image. This avoids a data acquisition process requiring a large amount of training data for each kind of distortion, as in a discriminative approach that trains a separate network only for images characterized by one particular kind of distortion. The generative model provides features or kernels indicative of a good-quality image, which can be used for discriminative detection of any number of types of artifacts.
The generative model is trained with deep machine learning to output a probability that the input image matches images of good quality. Returning to act 32 of FIG. 2, the probability map for the input image is determined. The probability map is a spatial distribution of probabilities of normality or abnormality. An abnormality reflects the likelihood of an artifact. The map is a spatial distribution, such as a probability calculated for each pixel or voxel based on the intensities or values of surrounding or neighboring pixels or voxels.
The model parameters (e.g., machine-learned features, kernels, or layer values) are used to calculate, for each voxel, the probability that its intensity fits the generative model of good quality. Voxels or pixels whose intensity and neighboring intensity distributions do not match those of the generative model have a low probability, creating a map of potential abnormality. Voxels or pixels whose intensity and neighboring intensity distributions do match those of the generative model have a high probability, creating a map of potential normality. Inverse probabilities may be used. The map is a spatial distribution of probabilities of normality and/or abnormality. Matching to a generative model of poor or low quality images may be used in other embodiments.
The probability map is determined as a function of the log-likelihood of the image locations matching the deep generative machine-learned model. The deep-learned generative model provides a log-likelihood of the input, represented as:

L(θ_n) = (1/(2σ²)) ‖Y − X‖² − log p(X)

where L is the loss used to train the generative model, σ is the standard deviation of the input distribution, Y is the ground-truth target image, θ_n is the set of model parameters, X is the input image, and log p(X) is the output of the generative model. For each voxel or pixel, a corresponding probability p(x) is calculated using the generative model. Other resolutions of the probability map may be used, such as a lower spatial resolution than the voxels or pixels.
The probabilities may be used directly as the probability map input to the discriminative classifier. Alternatively, the probability map is formulated as a deviation of the input image from the probabilities. For example, the probability map is computed as deviation(x) = 1 − p(x), emphasizing deviation from the model of the normal image.
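A short sketch of forming the two map variants follows, assuming the generative model exposes a per-voxel log-likelihood volume `log_p` (a hypothetical interface):

```python
import torch

def probability_and_deviation_maps(log_p: torch.Tensor):
    """log_p: per-voxel log-likelihood volume from the generative model."""
    p = log_p.exp().clamp(0.0, 1.0)  # probability p(x) of normality per voxel
    return p, 1.0 - p                # deviation(x) = 1 - p(x): likely artifact
```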
In act 34, the machine assigns a quality score for the image. The probability map is applied to a discriminative machine-learned classifier. Other inputs may be used, such as clinical data for the patient and/or features extracted from the image rather than from the probability map.
The score to be assigned is typically a global score or a score for artifacts in general. Alternatively or additionally, separate scores (e.g., separate blur and noise scores) are provided for different types of artifacts. The severity and extent of an artifact may be scored separately, or one score may be used that reflects the level of artifact as a function of both severity and extent.
In one embodiment, the score S is expressed as:

S = f(a_1, …, a_n)

where a_i = (sev_i, ext_i) captures the severity and extent of artifact i out of n artifact types. This provides a global score that is a function of the score for each type of artifact.
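As an illustration of one possible aggregation, a weighted average is sketched below; the averaging form is an assumption, since the combining function f is left unspecified above.

```python
def global_score(artifacts, weights=None):
    """artifacts: list of (severity, extent) pairs a_i, each on a 1-5 scale."""
    weights = weights or [1.0] * len(artifacts)
    per_artifact = [(sev + ext) / 2.0 for sev, ext in artifacts]  # score of a_i
    return sum(w * a for w, a in zip(weights, per_artifact)) / sum(weights)

# e.g., blur with (severity 4, extent 3) and noise with (severity 2, extent 2)
print(global_score([(4, 3), (2, 2)]))  # -> 2.75
```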
A discriminative machine-learned classifier is any type of machine-learned classifier that receives input features (e.g., feature values derived from the probability map) and outputs a classification (e.g., a score). A support vector machine, Bayesian network, probabilistic boosting tree, neural network, sparse auto-encoding classifier, or other now known or later developed machine learning may be used. Any semi-supervised, supervised, or unsupervised learning may be used. Hierarchical, cascade, or other approaches may be used.
In one embodiment, a neural network (e.g., a deep neural network) is used. Other deep-learned classifiers, such as sparse auto-encoding classifiers, may be trained and applied. The machine training is unsupervised in learning the features to use and how to classify given the learned feature vector. A function

ŝ = f(X, w)

is trained, where X is the probability map and w are the model parameters (i.e., network parameters), such that the predicted score is ŝ.
Referring to FIG. 3, the machine trains the discriminative classifier in act 42. For example, a deep neural network is trained with an L2 loss (e.g., least squares error) or other loss to estimate the optimal network parameters:

ŵ = argmin_w ‖s − f(X, w)‖²

The difference between the ground truth or known scores s of the training images and the predictions of the discriminative classifier is minimized.
The discriminative classifier is trained using training data. Samples of the input data with ground truth scores are used to learn to classify the score. For deep learning, the classifier learns the features of the input data to extract from the training data. Alternatively, the features (at least for the input) are manually programmed, such as filtering the scan data and inputting the results of the filtering. The training relates the input data to the classification through one or more layers. One layer may relate feature values to the class. For a deep-learned network, there may be further layers creating further abstract features from the outputs of previous layers. The resulting machine-trained classifier is a matrix for inputs, weighting, and combination to output the classification and/or a probability of class membership. The deep machine-trained classifier includes two or more layers relating the input to the class.
The discriminative classifier is trained to output a score for image quality. Any scoring may be used. For example, a numerical range representing quality is provided, such as 1-5 or 1-10, where either the larger or the smaller number represents the highest quality. As another example, alphanumeric categories are used, such as poor or good, or such as poor, below average, good, or excellent.
The discriminative classifier is trained to assign the class based on input features from the spatial distribution of probabilities. For example, deep learning is performed. The input is the deviation of the input image from the deep generative model. The discriminative classifier learns features to extract from the probability map and learns to relate values of the features to the class (i.e., score). The deep-learned features form a (k, n) matrix input for score prediction. In additional or alternative embodiments, manually programmed features (e.g., Haar wavelets, steerable features, maxima detection) are extracted from the probability map as the matrix of the input feature vector.
Other input features may be used in addition to features derived from the probability map. For example, clinical data (e.g., family history, symptoms, test results, and/or diagnoses) are input or features derived therefrom are input. In one embodiment, features derived directly from the input image are used. In addition to the probability map, features of intensity in the image are computed. The feature is learned as part of deep learning and/or is a manually programmed feature. The training uses inputs of both spatial distribution of probabilities and deep learning or other features extracted from the input images.
After creation, the machine-learned discriminative classifier includes one or more layers. For manually programmed features, one layer is a network that associates features (e.g., (k, n) matrices) of one or more input vectors to classes. For deep learning networks, at least one feature layer is learned from training data rather than manual programming. More than two layers may be provided, such as a neural network having three or more layers.
In one embodiment, a deep regressor is trained to estimate the image quality score based at least on the probability distribution. These probability-based features from the generative model may be combined or concatenated with features computed from an associated discriminative model, such as deep-learned features from the input image rather than from the probability map.
FIG. 4 illustrates one embodiment of a machine learning discriminant classifier process chain. The discriminative classifier is a deep learning neural network. As with fig. 1, the image 20 is input to a machine learning generative model 22, which results in a probability map 24 of the likelihood of a deviation or anomaly from the generative model. The probability map 24 is further encoded for feature reduction in the fully-connected layer 50. A convolutional layer may be used instead of the fully-connected layer 50. Additional layers may be provided. The output of the fully-connected layer 50 is an input feature vector 56 of values derived from the probability map 24.
The image 20 is also input to a series of convolutional layers 52, which output to a fully-connected layer 54. Additional, different, or fewer layers 52, 54 may be provided. The layers 52, 54 are trained with deep learning to extract features from the image 20. The output of the fully-connected layer 54 is an input feature vector 56 of values derived from the image 20 rather than from the probability map. Other paths for creating the input feature vectors may be used.
The classifier 58 assigns the image quality score with the probability map and the image applied to the discriminative machine-learned classifier. One set of features 56 used by the discriminative machine-learned classifier is derived from the probability map 24, and another set of features 56 used by the discriminative machine-learned classifier is derived from the image 20 without use of the generative model. One input vector is learned from training images, and the other input vector is learned from training probability maps.
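A sketch of this two-branch arrangement follows, assuming PyTorch and 64x64 single-channel inputs; all layer sizes are illustrative assumptions, not the architecture of the patent.

```python
import torch
import torch.nn as nn

class TwoStreamScorer(nn.Module):
    """One branch encodes the probability map 24 (fully-connected layer 50);
    the other encodes the image 20 (convolutional layers 52 feeding
    fully-connected layer 54); the two feature vectors 56 are concatenated
    and mapped to a score by the classifier 58. Sizes assume 64x64 inputs."""

    def __init__(self, map_dim=64 * 64, feat_dim=128):
        super().__init__()
        self.map_branch = nn.Sequential(  # probability-map branch (layer 50)
            nn.Flatten(), nn.Linear(map_dim, feat_dim), nn.ReLU())
        self.img_branch = nn.Sequential(  # image branch (layers 52 and 54)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, feat_dim), nn.ReLU())
        self.head = nn.Linear(2 * feat_dim, 1)  # classifier 58 -> score

    def forward(self, prob_map, image):
        f_map = self.map_branch(prob_map)  # feature vector from map 24
        f_img = self.img_branch(image)     # feature vector from image 20
        return self.head(torch.cat([f_map, f_img], dim=1))
```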
Referring again to FIG. 2, the machine assigns a quality score in act 34. The discriminative classifier associates the value of a feature with a score through the application of one or more input feature vectors. The scoring category is based on training. Where the score used for training includes consideration of the severity and extent of the artifact, the score output by the classifier provides an indication of the severity and extent of the artifact in the input image for a particular patient and/or scan.
In addition to outputting the score, the classifier may output additional information. The probability of class membership may be output (e.g., 75% likelihood of good quality and 25% likelihood of poor quality).
In one embodiment, the discriminative classifier is trained as a multi-task classifier. A cascade or hierarchy of classifiers may be used instead of or as the multi-task classifier. Any other classes may be used for the multi-task classification. In one approach, the machine identifies a type of artifact, or multiple types of artifacts, along with the score in act 36. For example, the discriminative classifier assigns a score of 4 for the image and identifies a blurring artifact. As another example, the discriminative classifier assigns a score of 3 and identifies blurring and undersampling artifacts. Separate scores may be output along with the corresponding artifact types. Severity and/or extent may be indicated as classes. The multi-task training adds multiple losses to derive the network parameters for the multiple classes.
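A sketch of such a multi-task head, assuming a shared feature trunk (e.g., the two-stream network above); the head sizes, number of artifact types, and equal loss weighting are assumptions:

```python
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Two heads on a shared feature trunk: score regression plus
    artifact-type classification."""

    def __init__(self, feat_dim=256, n_artifact_types=4):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)                    # quality score
        self.artifact = nn.Linear(feat_dim, n_artifact_types)  # type logits

    def forward(self, features):
        return self.score(features), self.artifact(features)

def multitask_loss(pred_score, true_score, logits, true_type, alpha=1.0):
    # Sum of the two penalties; alpha weights the classification term.
    return (nn.functional.mse_loss(pred_score.squeeze(1), true_score)
            + alpha * nn.functional.cross_entropy(logits, true_type))
```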
Generative models and/or discriminative classifiers are trained and used for a particular situation. Different generative models and/or discriminative classifiers are provided for different situations. For example, the model and/or classifier is specific to a diagnosis, artifact, scanning modality, tissue of interest, type of patient, or other environmental or clinical situation. In other embodiments, generative models and/or discriminative classifiers are trained on and applied over a series of scenarios. For example, the same generative model and classifier are used for any type of artifact in any tissue associated with a particular scanning modality.
In act 38, the machine transmits the quality score for the image of the patient. The transmission is over a network, through a communications interface, into a memory or database (e.g., to a computerized patient medical record), or to a display. For example, the image quality score is displayed with the image of the patient. The score may be presented as an annotation, a pop-up, or part of a notification.
In one embodiment, the image quality score ranges from 1 to 5, from best to worst. The score is based on the presence of particular artifacts. The extent and/or severity of artifacts throughout the image may be reflected in the score. Other information may be transmitted with the score, such as the type of artifact or other outputs of the multi-task classifier.
Referring to fig. 4, the discriminative machine-learned classifier, the deep generative machine-learned model, or both may be responsive to a segmentation of the image. FIG. 4 shows segmentation information 60 input to the convolutional layers 52 for deriving the feature values from the image. Alternatively or additionally, the segmentation information 60 is input to the generative model 22 or to the fully-connected layer 50 for deriving the feature values based on the probability map 24. Any of the generative model, features, and/or discriminative classifier use the segmentation.
The segmentation information 60 is anatomical or other image information. The segmentation distinguishes between foreground and background or identifies anatomical locations (e.g., identifies anatomical symmetry). Any segmentation may be used, such as thresholding, boundary detection, or histogram analysis.
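For illustration, a simple threshold-based segmentation producing a foreground mask (which could serve as the segmentation information 60) might look like the following sketch; the mean-intensity default is an assumption, and Otsu's histogram-based threshold would be a natural alternative.

```python
import numpy as np

def foreground_mask(image, threshold=None):
    """Binary foreground (anatomy) mask; 0 marks background."""
    if threshold is None:
        threshold = image.mean()  # crude default; Otsu's method is a common
                                  # histogram-based alternative
    return (image > threshold).astype(np.float32)
```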
Anatomical information may be incorporated in the assessment, as some artifacts may be better seen in the background or foreground. Separate generative models, features, and/or discriminative classifiers may be applied to the foreground and background. The results may then be combined to provide a score for the image. Alternatively, separate results for the foreground and background are output.
Anatomical symmetries or locations (e.g., patches) with known relationships (e.g., similar or different tissues) may be used for comparison. Separate classes, probability maps or features may be used. The results may be compared. The comparison may be entered as a feature. In generating the model, training and application may use the comparison, as the comparison may indicate what is normal or abnormal.
An anatomical frame of reference may be used for the classification. An anatomy-based coordinate system may be defined to normalize localization in the images. This may allow comparison. The normalization may scale or register (spatially transform) the training images or the input image to a common scale or alignment. Alternatively, the images are scaled and/or aligned as a precursor to use in training or application. The anatomy-based coordinates may be paired with the region or patch being regressed so that the same anatomy is considered for each image.
Referring again to fig. 2, the user or the medical scanner uses the image quality score. Where possible, time, effort, and/or exposure to radiation (e.g., x-rays) for scanning is to be avoided. Images of sufficiently good quality allow a diagnosis with less risk of error. Poor quality images may not be sufficient for diagnosis, so the patient is scanned again. The score is used so that rescanning occurs only when needed. Once the global image quality score or artifact-specific scores are predicted, the operator of the medical scanner or the medical scanner itself may decide whether to rescan the patient (i.e., whether to repeat the generation of the medical image in act 30). The score is used in the decision whether or not to use the generated image. As a result, the later physician review is more likely to have images useful for diagnosis, and rescanning is avoided where possible.
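A sketch of that decision logic, assuming the 1-to-5 best-to-worst convention mentioned above and a hypothetical acceptance threshold:

```python
def needs_rescan(quality_score, worst_acceptable=3.0):
    """Return True if the image should be reacquired (act 30 repeated),
    using the 1-to-5 best-to-worst convention: higher scores are worse."""
    return quality_score > worst_acceptable
```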
FIG. 5 illustrates one embodiment of a system for use in machine learning and/or for an application. The system is distributed between the imaging system 80 and the remote server 88. In other embodiments, the system is only the server 88 or only the imaging system 80 without the network 87. In yet further embodiments, the system is a computer or workstation.
The system includes an imaging system 80, a processor 82, a memory 84, a display 86, a communication network 87, a server 88, and a database 90. Additional, different, or fewer components may be provided. For example, a network connection or interface is provided, such as for networking with a medical imaging network or a data archiving system. In another example, a user interface is provided. As another example, the server 88 and the database 90 are not provided, or only the server 88 and the database 90 are provided. In other examples, the server 88 is connected to a number of imaging systems 80 and/or processors 82 via a network 87.
The processor 82, memory 84, and display 86 are part of the medical imaging system 80. Alternatively, the processor 82, memory 84, and display 86 are part of an archiving and/or image processing system separate from the imaging system 80, such as associated with a medical records database workstation or server. In other embodiments, the processor 82, memory 84, and display 86 are a personal computer (such as a desktop or laptop), a workstation, a server, a network, or a combination thereof. The processor 82, display 86 and memory 84 may be provided without other components for acquiring data by scanning a patient.
The imaging system 80, processor 82, memory 84 and display 86 are provided at the same location. The location may be the same room, the same building, or the same facility. These devices are local to each other and remote from server 88. The servers 88 are separated by the network 87 by being in different facilities or by being in different cities, counties, states or countries. The server 88 and database 90 are remote from the processor 82 and/or the location of the imaging system 80.
The imaging system 80 is a medical diagnostic imaging system. Ultrasound, Computed Tomography (CT), x-ray, fluoroscopy, Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and/or Magnetic Resonance (MR) systems may be used. The imaging system 80 may include an emitter and include a detector for scanning or receiving data representative of the interior of the patient.
In one embodiment, imaging system 80 is a CT system. An x-ray source is coupled to the gantry. Opposite the x-ray source, a detector is also connected to the gantry. The patient is positioned between the source and the detector. The source and detector are on opposite sides of and rotate and/or translate with respect to the patient. The detected x-ray energy passing through the patient is converted, reconstructed or transformed into data representing different spatial locations within the patient.
In another embodiment, the imaging system 80 is an MR system. The MR system includes a main field magnet, such as a cryogenic magnet, and a gradient coil. A whole-body coil is provided for transmission and/or reception. A local coil may be used, such as for receiving electromagnetic energy emitted by atoms in response to a pulse. Other processing means may be provided, such as for planning and generating transmit pulses for the coils based on the sequence and for receiving k-space data and processing the received k-space data. The received k-space data is converted to object or image space data using fourier processing.
The memory 84 may be a graphics processing memory, video random access memory, system memory, cache memory, hard drive, optical media, magnetic media, flash drive, buffer, database, combinations thereof, or other now known or later developed memory device for storing data or video information. The memory 84 is part of the imaging system 80, part of a computer associated with the processor 82, part of a database, part of another system, picture archiving memory, or a separate device.
The memory 84 stores medical imaging data representing a patient, weights or values for parameters of some of the layers that make up the machine-learned classifier, outputs from different layers, one or more machine-learned matrices, and/or images. Memory 84 may store data for applications during processing and/or may store training data (e.g., images and scores).
Alternatively or additionally, the memory 84 or other memory is a non-transitory computer readable storage medium that stores data representing instructions executable by the programmed processor 82 for use in training or machine learning classifiers in medical imaging. Instructions for implementing the processes, methods, and/or techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as caches, buffers, RAMs, removable media, hard drives, or other computer-readable storage media. Non-transitory computer readable storage media include various types of volatile or non-volatile storage media. The functions, acts or tasks illustrated in the figures or described herein are performed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer over a computer network or over telephone lines. In yet further embodiments, the instructions are stored within a given computer, CPU, GPU or system.
The processor 82 is a general purpose processor, central processing unit, control processor, graphics processor, digital signal processor, three-dimensional rendering processor, image processor, application specific integrated circuit, field programmable gate array, digital circuit, analog circuit, combinations thereof, or other now known or later developed device for training or applying machine learning classification. The processor 82 is a single device or a plurality of devices operating in series, in parallel, or separately. The processor 82 may be the main processor of a computer, such as a laptop or desktop computer, or may be a processor for handling certain tasks in a larger system, such as in the imaging system 80. The processor 82 is configured by instructions, designs, hardware, and/or software to perform the actions discussed herein.
The processor 82 is configured to perform the acts discussed above for training or application. The processor 82 uses one or more matrices stored in the memory for the machine-learned generative model. The probability map is created by applying the input image to the generative model. The processor 82 derives features from the probability map. Features may be derived from other sources, such as the input image. The processor 82 uses one or more matrices stored in the memory for the machine-learned discriminative classifier. The score is output by applying the values of the features to the discriminative classifier.
The processor 82 is configured to communicate the score, with or without other classifications, to the display 86 or to the memory 84 via the network 87. The processor 82 may be configured to generate a user interface for receiving a correction or verification of the classification result.
The display 86 is a monitor, LCD, projector, plasma display, CRT, printer, or other now known or later developed device for outputting visual information. The display 86 receives images, graphics, text, quantities, or other information from the processor 82, memory 84, imaging system 80, and/or server 88. One or more medical images are displayed. The image is of a part of a patient. The image includes an indication (such as a graphic or colorization) of the classification result (such as the global score, an artifact-specific score, and/or the type of artifact). An artifact may be localized or detected and highlighted, such as where detection is another class output by the discriminative classifier. The score may be displayed as an image without a medical image representation of the patient.
The network 87 is a local area network, a wide area network, an enterprise network, another network, or a combination thereof. In one embodiment, the network 87 is at least partially the Internet. The network 87 provides communication between the processor 82 and the server 88 using TCP/IP communications. Any format for communication may be used. In other embodiments, dedicated or direct communication is used.
The server 88 is a processor or group of processors. More than one server 88 may be provided. The server 88 is configured by hardware and/or software. In one embodiment, the server 88 utilizes training data in the database 90 to perform machine learning. The machine learning matrix is provided to the processor 82 for application. The results of the classification may be received from processor 82 for use in further training. Alternatively, the server 88 performs an application on the image received from the imaging system 80 and provides the score to the imaging system 80.
The database 90 is a memory (such as a bank of memories) for storing training data (such as graphs and corresponding scores). The values or weights of the parameters that generate the model and/or discriminative classifier are stored in the database 90 and/or memory 84.
Although the invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.

Claims (17)

1. A method for image quality scoring of images from a medical scanner, the method comprising:
generating, by the medical scanner, an image representative of a patient, the image having a level of artifact due to generation by the medical scanner;
determining, by a machine with a deep generative machine learning model, a probability map of artifacts as a function of location for the image;
assigning, by the machine, a quality score for the image with the probability map applied to a discriminative machine learning classifier; and
transmitting a quality score for the image of the patient;
wherein assigning comprises assigning with the probability map and the image applied to the discriminative machine learning classifier, a first set of features used by the discriminative machine learning classifier being derived from the probability map and a second set of features used by the discriminative machine learning classifier being derived from the image.
2. The method of claim 1, wherein generating comprises generating a computed tomography, magnetic resonance, ultrasound, positron emission tomography, or single photon emission computed tomography image.
3. The method of claim 1, wherein generating comprises generating the image as a two-dimensional representation of pixels or a three-dimensional set of voxels.
4. The method of claim 1, wherein generating comprises generating with noise artifacts, blurring artifacts, shadowing artifacts, undersampling artifacts, or a combination thereof.
5. The method of claim 1, wherein determining comprises determining with the deep generative machine learning model learned with only training images having a quality above a threshold.
6. The method of claim 1, wherein determining comprises determining the probability map as a function of log likelihood of locations of the image matching the deep generative machine learning model.
7. The method of claim 1, wherein determining comprises determining the probability map as a deviation from a normal image modeled by the deep generative machine learning model.
8. The method of claim 1, wherein assigning the quality score comprises assigning with the discriminative machine learning classifier comprising a deep neural network.
9. The method of claim 1 wherein assigning comprises assigning a quality score as a function of the severity and extent of the artifact.
10. The method of claim 1, further comprising identifying a type of artifact, the assigning and identifying being performed with the discriminative machine learning classifier as a multi-task classifier, and wherein transmitting comprises transmitting the quality score and the type of artifact.
11. The method of claim 1, wherein the discriminative machine learning classifier, the deep generative machine learning model, or both are responsive to segmentation of the image.
12. The method of claim 1, wherein transmitting comprises transmitting the quality score to a display with the image.
13. The method of claim 1, further comprising rescanning the patient with the medical scanner in response to the quality score.
14. A method for training a machine to determine an image quality score, the method comprising:
training, by the machine, a deep generative model using a piecewise differentiable function, the deep generative model being trained to output a spatial distribution of probabilities in response to an input image; and
training, by the machine, a discriminative classifier, the discriminative classifier being trained to output a score for image quality as a function of an input of the spatial distribution of probabilities;
wherein training the discriminative classifier comprises training using inputs of the spatial distribution of probabilities and deep-learned features extracted from the input image.
15. The method of claim 14, wherein training the deep generative model comprises training using images as training data, all of the images having a threshold level of image quality, the output being a probability of a match.
16. The method of claim 14, wherein training the discriminative classifier comprises training using deep learning, wherein the probability input is a deviation of the input image from the deep generative model.
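Read together, claims 14-16 suggest a two-stage training procedure. A minimal sketch follows, assuming an L1 reconstruction objective as the piecewise-differentiable function and mean-squared error for the score regression; both loss choices are assumptions, not the patent's:

```python
import torch
import torch.nn as nn

def train_quality_pipeline(generative, classifier, good_images, scored_pairs):
    """Stage 1 (claims 14-15): fit the deep generative model on images that
    all meet the quality threshold. Stage 2 (claims 14, 16): train the
    discriminative classifier on the deviation map plus the image."""
    gen_opt = torch.optim.Adam(generative.parameters(), lr=1e-3)
    for batch in good_images:
        gen_opt.zero_grad()
        loss = nn.functional.l1_loss(generative(batch), batch)  # piecewise diff.
        loss.backward()
        gen_opt.step()

    cls_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    for image, target_score in scored_pairs:
        cls_opt.zero_grad()
        with torch.no_grad():
            deviation = (image - generative(image)).abs()  # probability input
        pred = classifier(deviation, image)
        loss = nn.functional.mse_loss(pred, target_score)
        loss.backward()
        cls_opt.step()
```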
17. A method for image quality scoring of images from a medical scanner, the method comprising:
generating, by the medical scanner, an image representative of a patient, the image having a level of artifact due to generation by the medical scanner;
determining, by a machine, using a deep generative machine-learning model, a probability map of artifacts as a function of location in the image;
assigning, by the machine, a quality score for the image, with the probability map applied to a discriminative machine-learning classifier, the probability map comprising a first input vector and features of the image comprising a second input vector; and
transmitting the quality score for the image of the patient;
wherein the discriminative machine-learning classifier comprises a deep-learning classifier, the second input vector having been learned from training images and the first input vector having been learned from training probability maps.
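Claim 17 distinguishes two separately learned input vectors rather than stacked channels; a hedged sketch with one encoder per input (all sizes hypothetical):

```python
import torch
import torch.nn as nn

def make_encoder() -> nn.Sequential:
    """Small convolutional encoder producing an 8-dimensional vector."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoVectorClassifier(nn.Module):
    """Claim 17 sketch: the first input vector is learned from training
    probability maps, the second from training images; both feed the
    deep-learning classifier that outputs the quality score."""
    def __init__(self):
        super().__init__()
        self.map_encoder = make_encoder()    # first input vector
        self.image_encoder = make_encoder()  # second input vector
        self.score = nn.Linear(16, 1)

    def forward(self, prob_map, image):
        v = torch.cat([self.map_encoder(prob_map),
                       self.image_encoder(image)], dim=1)
        return self.score(v)
```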
CN201710487019.7A 2016-06-23 2017-06-23 Image quality scoring using depth generation machine learning models Active CN107545309B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662353737P 2016-06-23 2016-06-23
US62/353737 2016-06-23
US15/606069 2017-05-26
US15/606,069 US10043088B2 (en) 2016-06-23 2017-05-26 Image quality score using a deep generative machine-learning model

Publications (2)

Publication Number Publication Date
CN107545309A (en) 2018-01-05
CN107545309B (en) 2021-10-08

Family

ID=60677327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710487019.7A Active CN107545309B (en) 2016-06-23 2017-06-23 Image quality scoring using depth generation machine learning models

Country Status (2)

Country Link
US (1) US10043088B2 (en)
CN (1) CN107545309B (en)

Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2912310T3 (en) 2016-01-05 2022-05-25 Reald Spark Llc Gaze Correction in Multiview Images
US10346740B2 (en) * 2016-06-01 2019-07-09 Kla-Tencor Corp. Systems and methods incorporating a neural network and a forward physical model for semiconductor applications
US10169647B2 (en) * 2016-07-27 2019-01-01 International Business Machines Corporation Inferring body position in a scan
EP3555850B1 (en) * 2016-12-15 2021-10-27 General Electric Company System and method for image segmentation using a joint deep learning model
ES2967691T3 (en) 2017-08-08 2024-05-03 Reald Spark Llc Fitting a digital representation of a head region
US11442910B2 (en) * 2017-09-28 2022-09-13 Intel Corporation Multiple order delta compression
EP3564962A1 (en) * 2018-04-30 2019-11-06 Koninklijke Philips N.V. Motion artifact prediction during data acquisition
CN111542853B (en) * 2017-10-31 2024-05-14 皇家飞利浦有限公司 Motion artifact prediction during data acquisition
US10698063B2 (en) * 2017-11-01 2020-06-30 Siemens Healthcare Gmbh Motion artifact reduction of magnetic resonance images with an adversarial trained network
US10706262B2 (en) * 2018-01-08 2020-07-07 3DLOOK Inc. Intelligent body measurement
CN108224895B (en) * 2018-01-08 2020-11-10 合肥美的智能科技有限公司 Article information input method and device based on deep learning, refrigerator and medium
US10650530B2 (en) * 2018-03-29 2020-05-12 Uveye Ltd. Method of vehicle image comparison and system thereof
US10643332B2 (en) * 2018-03-29 2020-05-05 Uveye Ltd. Method of vehicle image comparison and system thereof
WO2019211186A1 (en) * 2018-05-02 2019-11-07 Koninklijke Philips N.V. Generating a simulated image of a baby
US10878561B2 (en) * 2018-05-31 2020-12-29 General Electric Company Automated scanning workflow
US10795752B2 (en) * 2018-06-07 2020-10-06 Accenture Global Solutions Limited Data validation
CN108898591A (en) * 2018-06-22 2018-11-27 北京小米移动软件有限公司 Methods of marking and device, electronic equipment, the readable storage medium storing program for executing of picture quality
US10726548B2 (en) * 2018-06-25 2020-07-28 Bay Labs, Inc. Confidence determination in a medical imaging video clip measurement based upon video clip image quality
US10631791B2 (en) 2018-06-25 2020-04-28 Caption Health, Inc. Video clip selector for medical imaging and diagnosis
KR20200003444A (en) * 2018-07-02 2020-01-10 삼성전자주식회사 Method and device to build image model
US10991092B2 (en) * 2018-08-13 2021-04-27 Siemens Healthcare Gmbh Magnetic resonance imaging quality classification based on deep machine-learning to account for less training data
US10825149B2 (en) * 2018-08-23 2020-11-03 Siemens Healthcare Gmbh Defective pixel correction using adversarial networks
US10796181B2 (en) * 2018-09-18 2020-10-06 GE Precision Healthcare LLC Machine learning based method and system for analyzing image artifacts and imaging system failure
US10878311B2 (en) * 2018-09-28 2020-12-29 General Electric Company Image quality-guided magnetic resonance imaging configuration
US10803585B2 (en) 2018-10-09 2020-10-13 General Electric Company System and method for assessing image quality
CN109671051B (en) * 2018-11-15 2021-01-26 北京市商汤科技开发有限公司 Image quality detection model training method and device, electronic equipment and storage medium
CA3120480A1 (en) * 2018-11-24 2020-05-28 Densitas Incorporated System and method for assessing medical images
US10354205B1 (en) 2018-11-29 2019-07-16 Capital One Services, Llc Machine learning system and apparatus for sampling labelled data
CN109754447B (en) * 2018-12-28 2021-06-22 上海联影智能医疗科技有限公司 Image generation method, device, equipment and storage medium
CN109740667B (en) * 2018-12-29 2020-08-28 中国传媒大学 Image quality evaluation method based on quality sorting network and semantic classification
CN109741317B (en) * 2018-12-29 2023-03-31 成都金盘电子科大多媒体技术有限公司 Intelligent evaluation method for medical image
CN109741316B (en) * 2018-12-29 2023-03-31 成都金盘电子科大多媒体技术有限公司 Intelligent medical image film evaluation system
US10997475B2 (en) * 2019-02-14 2021-05-04 Siemens Healthcare Gmbh COPD classification with machine-trained abnormality detection
CN111626974B (en) * 2019-02-28 2024-03-22 苏州润迈德医疗科技有限公司 Quality scoring method and device for coronary angiography image sequence
JP2020156800A (en) * 2019-03-27 2020-10-01 ソニー株式会社 Medical arm system, control device and control method
US11531875B2 (en) * 2019-05-14 2022-12-20 Nasdaq, Inc. Systems and methods for generating datasets for model retraining
US11933870B2 (en) * 2019-06-19 2024-03-19 Siemens Healthineers Ag Contrast and/or system independent motion detection for magnetic resonance imaging
CN111127386B (en) * 2019-07-08 2023-04-18 杭州电子科技大学 Image quality evaluation method based on deep learning
CN110840482B (en) * 2019-10-28 2022-12-30 苏州佳世达电通有限公司 Ultrasonic imaging system and method thereof
CN110838116B (en) 2019-11-14 2023-01-03 上海联影医疗科技股份有限公司 Medical image acquisition method, device, equipment and computer-readable storage medium
CN112949344B (en) * 2019-11-26 2023-03-31 四川大学 Characteristic autoregression method for anomaly detection
US11348243B2 (en) 2020-01-24 2022-05-31 GE Precision Healthcare LLC Systems and methods for medical image style transfer using deep neural networks
CN111260647B (en) * 2020-03-12 2023-07-25 南京安科医疗科技有限公司 CT scanning auxiliary method based on image detection, computer readable storage medium and CT scanning device
CN111292378A (en) * 2020-03-12 2020-06-16 南京安科医疗科技有限公司 CT scanning auxiliary method, device and computer readable storage medium
CN111798439A (en) * 2020-07-11 2020-10-20 大连东软教育科技集团有限公司 Medical image quality interpretation method and system for online and offline fusion and storage medium
US11670072B2 (en) 2020-10-02 2023-06-06 Servicenow Canada Inc. Systems and computer-implemented methods for identifying anomalies in an object and training methods therefor
US11848097B2 (en) 2020-12-17 2023-12-19 Evicore Healthcare MSI, LLC Machine learning models for automated request processing
WO2022268850A1 (en) 2021-06-24 2022-12-29 Koninklijke Philips N.V. Out of distribution testing for magnetic resonance imaging
CN113610788B (en) * 2021-07-27 2023-03-07 上海众壹云计算科技有限公司 Fault monitoring method and device for image acquisition device, electronic equipment and storage medium
DE102021209169A1 (en) 2021-08-20 2023-02-23 Siemens Healthcare Gmbh Validation of AI-based result data
US20230079353A1 (en) * 2021-09-14 2023-03-16 Siemens Healthcare Gmbh Image correction using an invertable network

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6819790B2 (en) * 2002-04-12 2004-11-16 The University Of Chicago Massive training artificial neural network (MTANN) for detecting abnormalities in medical images
KR100474848B1 (en) * 2002-07-19 2005-03-10 삼성전자주식회사 System and method for detecting and tracking a plurality of faces in real-time by integrating the visual ques
US7664298B2 (en) * 2003-03-25 2010-02-16 Imaging Therapeutics, Inc. Methods for the compensation of imaging technique in the processing of radiographic images
US7860344B1 (en) * 2005-05-06 2010-12-28 Stochastech Corporation Tracking apparatus and methods using image processing noise reduction
US7813581B1 (en) * 2005-05-06 2010-10-12 Fitzpatrick Ben G Bayesian methods for noise reduction in image processing
US8866936B2 (en) * 2008-07-24 2014-10-21 Florida State University of Research Foundation Systems and methods for training an active random field for real-time image denoising
JP5147903B2 (en) * 2010-07-12 2013-02-20 キヤノン株式会社 Image processing apparatus, image processing method, and program
US8712157B2 (en) * 2011-04-19 2014-04-29 Xerox Corporation Image quality assessment
US8861884B1 (en) * 2011-11-21 2014-10-14 Google Inc. Training classifiers for deblurring images
CN104766299A (en) * 2014-12-26 2015-07-08 国家电网公司 Image quality assessment method based on probabilistic graphical model
US10430688B2 (en) * 2015-05-27 2019-10-01 Siemens Medical Solutions Usa, Inc. Knowledge-based ultrasound image enhancement
US9965719B2 (en) * 2015-11-04 2018-05-08 Nec Corporation Subcategory-aware convolutional neural networks for object detection
CN106874921B (en) * 2015-12-11 2020-12-04 清华大学 Image classification method and device

Also Published As

Publication number Publication date
CN107545309A (en) 2018-01-05
US20170372155A1 (en) 2017-12-28
US10043088B2 (en) 2018-08-07

Similar Documents

Publication Publication Date Title
CN107545309B (en) Image quality scoring using depth generation machine learning models
US10489907B2 (en) Artifact identification and/or correction for medical imaging
US10387765B2 (en) Image correction using a deep generative machine-learning model
CN110858391B (en) Patient-specific deep learning image denoising method and system
US10643331B2 (en) Multi-scale deep reinforcement machine learning for N-dimensional segmentation in medical imaging
US10991092B2 (en) Magnetic resonance imaging quality classification based on deep machine-learning to account for less training data
US20190088359A1 (en) System and Method for Automated Analysis in Medical Imaging Applications
US10667776B2 (en) Classifying views of an angiographic medical imaging system
EP3705047B1 (en) Artificial intelligence-based material decomposition in medical imaging
KR20230118667A (en) Systems and methods for evaluating pet radiographic images
US20230079353A1 (en) Image correction using an invertable network
US20230214664A1 (en) Learning apparatus, method, and program, image generation apparatus, method, and program, trained model, virtual image, and recording medium
US11933870B2 (en) Contrast and/or system independent motion detection for magnetic resonance imaging
US20240005484A1 (en) Detecting anatomical abnormalities by segmentation results with and without shape priors
US20230360366A1 (en) Visual Explanation of Classification
US20230368913A1 (en) Uncertainty Estimation in Medical Imaging
How Deep learning application on head CT images
Jin A Quality Assurance Pipeline for Deep Learning Segmentation Models for Radiotherapy Applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant