US10600181B2 - Systems and methods for probabilistic segmentation in anatomical image processing - Google Patents

Systems and methods for probabilistic segmentation in anatomical image processing

Info

Publication number
US10600181B2
US10600181B2 (application US15/852,119; pre-grant publication US201715852119A)
Authority
US
United States
Prior art keywords
patient
anatomical structure
boundary
segmentation
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/852,119
Other versions
US20180182101A1 (en)
Inventor
Peter Kersten PETERSEN
Michiel Schaap
Leo Grady
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HeartFlow Inc
Original Assignee
HeartFlow Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HeartFlow Inc filed Critical HeartFlow Inc
Priority to US15/852,119 priority Critical patent/US10600181B2/en
Publication of US20180182101A1 publication Critical patent/US20180182101A1/en
Assigned to HEARTFLOW, INC. reassignment HEARTFLOW, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PETERSEN, PETER KERSTEN, GRADY, LEO, SCHAAP, MICHIEL
Priority to US16/790,037 priority patent/US11443428B2/en
Application granted granted Critical
Publication of US10600181B2 publication Critical patent/US10600181B2/en
Assigned to HAYFIN SERVICES LLP reassignment HAYFIN SERVICES LLP SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEARTFLOW, INC.
Priority to US17/817,737 priority patent/US20220383495A1/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06K 9/6277
    • G06T 7/0014: Biomedical image inspection using an image reference approach
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/143: Segmentation involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T 7/60: Analysis of geometric attributes
    • A61B 6/504: Clinical applications involving diagnosis of blood vessels, e.g. by angiography
    • G06T 2200/04: Indexing scheme involving 3D image data
    • G06T 2207/20076: Probabilistic image processing
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular
    • G06T 2207/30104: Vascular flow; Blood flow; Perfusion
    • G06T 2207/30172: Centreline of tubular or elongated structure

Definitions

  • Various embodiments of the present disclosure relate generally to medical imaging and related methods. More specifically, particular embodiments of the present disclosure relate to systems and methods for performing probabilistic segmentation in anatomical image analysis.
  • a common approach for automating segmentation may include using a learning based system.
  • a learning based system may be trained to predict a class label probability for each image element (e.g., pixel, voxel, or object parameter). For instance, a shape may be fitted, or a structured output model may be applied to assign a label to each image element based on its predicted label probability.
  • a learning based system may also be trained to estimate, for an image, the locations of boundaries of different structures within the image. In some cases, these estimated boundaries may not align with the boundaries between discrete image elements. The final segmentation may reflect a likely segmentation for a given image.
  • many segmentation boundary locations may be plausible; even human technicians may select or draw different boundary locations due to ambiguous image data.
  • boundaries estimated by a trained system may be ambiguous due to a lack of appropriate training examples or due to a sub-optimally trained method. In this case, the trained system may not provide accurate object boundaries.
  • a final segmentation often does not indicate whether the segmentation is a nearly definite accurate representation of a portion of anatomy, or whether an alternative segmentation is almost equally likely to render an accurate representation of anatomy. Further, the final segmentation may not indicate region(s) where the final segmentation boundary was more certain or less certain. Accordingly, a desire exists to understand the statistical confidence (or statistical uncertainty) of a provided segmentation. A desire also exists for generating alternative or aggregated segmentation solutions.
  • the present disclosure is directed to overcoming one or more of the above-mentioned problems or interests.
  • One method of performing probabilistic segmentation in anatomical image analysis includes: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.
  • a system for performing probabilistic segmentation in anatomical image analysis comprises: a data storage device storing instructions for performing probabilistic segmentation in anatomical image analysis; and a processor configured for: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.
  • a non-transitory computer readable medium for use on a computer system containing computer-executable programming instructions for performing a method of probabilistic segmentation in anatomical image analysis.
  • the method includes: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.
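  • The claimed steps describe a training/usage pipeline. A minimal sketch of its shape is given below; every function name and the single "radius" geometric parameter are invented here for illustration and are not the patented implementation.

```python
from statistics import mean, stdev

def generate_parameterized_representation(images, labels):
    # toy parameterization: one geometric parameter (a boundary radius)
    # taken from each training image's labels
    return [label["radius"] for label in labels]

def boundary_probability_distribution(representation, patient_image):
    # toy "probability distribution": summarize the training radii as a
    # Gaussian over the patient-specific segmentation boundary location
    return {"mean": mean(representation), "std": stdev(representation)}

training_labels = [{"radius": 1.9}, {"radius": 2.1}, {"radius": 2.0}]
representation = generate_parameterized_representation(None, training_labels)
pdf = boundary_probability_distribution(representation, patient_image=None)
```

A real system would learn the mapping from image content to the distribution parameters; this stub only shows the data flow from labeled training images to a patient-specific boundary distribution.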
  • FIG. 1 is a block diagram of an exemplary system and network for performing probabilistic segmentation in image analysis, according to an exemplary embodiment of the present disclosure.
  • FIGS. 2A and 2B are flowcharts of an exemplary method for performing probabilistic segmentation using parameterized object boundaries, according to an exemplary embodiment of the present disclosure.
  • FIGS. 3A and 3B are flowcharts of an exemplary embodiment of the method of FIGS. 2A and 2B , as applied to probabilistic segmentation of blood vessels, according to an exemplary embodiment of the present disclosure.
  • FIGS. 4A and 4B are flowcharts of an exemplary embodiment of the method for performing probabilistic segmentation using local image descriptors, according to an exemplary embodiment of the present disclosure.
  • medical image segmentation may involve rendering various segmentation boundary locations.
  • a final segmentation includes a likely segmentation or boundary for an object of interest (e.g., an anatomical structure) represented in a given image.
  • many other segmentation boundary locations may be plausible as well.
  • although the class label probability for image elements used in segmentation may be post-processed, a desire exists to understand the confidence or uncertainty of a final segmentation. Further, a desire exists to provide alternative solution(s) to the final segmentation.
  • the present disclosure is directed to providing information on the statistical confidence or statistical uncertainty of a segmentation by predicting a probability distribution of segmentation boundary locations.
  • the present disclosure may include both a training phase and a testing (or usage or production) phase to estimate a segmentation boundary and/or probability density function for the segmentation boundary.
  • the disclosed systems and methods may be applied to segmenting anatomy in received image(s) of a patient of interest and determining the probability density function for the boundary of the segmented anatomy.
  • the training phase may include developing a set of probability density functions for at least one parameter of an object parameterization associated with received images.
  • the training phase may involve receiving a collection of images, receiving or inputting information of an anatomical part or portion shown in each of the images (e.g., a localized anatomy for each of the images), defining a parameterization of the anatomical part or portion, and generating a probability density function for each parameter of the parameterization, based on the received information.
  • An output from the training phase may include a probability density function for each parameter of an anatomical or medical model.
  • a testing phase may include receiving images of a patient's anatomy.
  • the patient may be a patient of interest, e.g., a patient desiring a diagnostic test.
  • the testing phase may involve completing a segmentation (e.g., rendering a segmentation boundary or surface reconstruction) of the patient's anatomy based on the received images and using the stored training phase probability density function(s) to compute a patient-specific probability density function for a parameter of the completed segmentation.
  • the term “exemplary” is used in the sense of “example,” rather than “ideal.” Although this exemplary embodiment is written in the context of medical image analysis, the present disclosure may equally apply to any non-medical image analysis or computer vision evaluation.
  • FIG. 1 depicts a block diagram of an exemplary environment of a system and network for performing probabilistic segmentation.
  • FIG. 1 depicts a plurality of physicians 102 and third party providers 104 , any of whom may be connected to an electronic network 100 , such as the Internet, through one or more computers, servers, and/or handheld mobile devices.
  • Physicians 102 and/or third party providers 104 may create or otherwise obtain images of one or more patients' cardiac, vascular, and/or organ systems.
  • the physicians 102 and/or third party providers 104 may also obtain any combination of patient-specific information, such as age, medical history, blood pressure, blood viscosity, etc.
  • Physicians 102 and/or third party providers 104 may transmit the cardiac/vascular/organ images and/or patient-specific information to server systems 106 over the electronic network 100 .
  • Server systems 106 may include storage devices for storing images and data received from physicians 102 and/or third party providers 104 .
  • Server systems 106 may also include processing devices for processing images and data stored in the storage devices.
  • the probabilistic segmentation of the present disclosure may be performed on a local processing device (e.g., a laptop), absent an external server or network.
  • FIGS. 2A and 2B describe one embodiment for performing probabilistic segmentation using a learning system trained with parameterized object boundaries.
  • FIGS. 3A and 3B are directed to specific embodiments or applications of the methods discussed in FIGS. 2A and 2B .
  • FIG. 3A and FIG. 3B describe an embodiment for the probabilistic segmentation of blood vessels using a learning system trained on parameterized object boundaries.
  • FIGS. 4A and 4B describe another probabilistic segmentation method which employs a learning system trained using local image descriptors.
  • the methods of FIGS. 4A and 4B permit segmenting an image without a regressor or simplifying transformation. All of the methods may be performed by server systems 106 , based on information, images, and data received from physicians 102 and/or third party providers 104 over electronic network 100 .
  • FIGS. 2A and 2B describe an embodiment for predicting and testing a probability density function. Understanding a probability density function in association with a segmentation may be helpful for understanding whether the resulting segmentation is reliable, or whether portion(s) of the resulting segmentation are possibly inaccurate. Since analyses may be performed based on the segmentation, the present disclosure may provide an understanding of where or how the analyses could be inaccurate due to the segmentation. For example, accuracy of segmentation of a large vessel may impact blood flow analyses, whereas accuracy of segmentation in a small downstream vessel may have little influence on the accuracy of a predicted blood flow value. A probability density function could help determine whether to use a segmentation for an analysis, or which part of a segmentation could be used reliably for an analysis.
  • probabilistic segmentation may include two phases: a training phase and a testing phase.
  • the training phase may involve training a learning system (e.g., a deep learning system) to predict a probability density function (PDF) for a parameter of a segmentation/object parameterization.
  • the testing phase may include predicting a probability density function for a parameter of an object parameterization of a newly received image.
  • FIG. 2A is a flowchart of an exemplary training phase method 200 for training a learning system (e.g., a deep learning system) to predict a probability density function, according to various embodiments.
  • Method 200 may provide the basis for a testing phase or production phase in method 210 of FIG. 2B , for the probabilistic segmentation of an imaged object of interest of a specific patient.
  • step 201 of method 200 may include receiving one or more images in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.).
  • these images may, for instance, be from a medical imaging device, e.g., computed tomography (CT), positron emission tomography (PET), single-photon emission computerized tomography (SPECT), magnetic resonance imaging (MRI), microscope, ultrasound, (multi-view) angiography, etc.
  • multiple images for a single patient may be used.
  • the images may comprise a structure of a patient's anatomy.
  • the images may be of numerous individuals having similar anatomical features or numerous individuals having different anatomical features.
  • these images may be from any source, e.g., a camera, satellite, radar, lidar, sonar, telescope, microscope, etc.
  • images received in step 201 may be referred to as, “training images.”
  • step 203 may include receiving an annotation for one or more structures of interest shown in one or more of the training images.
  • all training images may be annotated.
  • This type of embodiment may be referred to as, “supervised learning.”
  • Another embodiment may include only a subset of the training images with annotations. This type of scenario may be referred to as, “semi-supervised learning.”
  • the structures of interest may include a blood vessel or tissue of the patient.
  • annotation(s) may include labels for vessel names (e.g., right coronary artery (RCA), left anterior descending artery (LAD), left circumflex artery (LCX), etc.), vessel landmarks (e.g., aortic valve point, ostia points, bifurcations, etc.), estimated vessel location, flags (e.g., noted portions where imaging is ambiguous or boundaries are unclear), etc.
  • step 205 may include defining a parameterization for an object boundary of the structure of interest.
  • step 205 may include defining whether to generate an implicit or an explicit surface representation as an object boundary of the structure of interest.
  • Defining the object boundary of the structure of interest may include subdividing the structure of interest into one or more portions and estimating the boundary of each of the portions.
  • the structure of interest may be subdivided into one or more segments, and the boundary of each of the segments may be estimated.
  • Estimating the boundary may include estimating the location of the structure (or a surface representation of the structure), for example, by estimating the probability that a certain image region or voxel includes a representation of the structure.
  • a person's face may be defined as the structure of interest and step 205 may include defining a desired parameterization for the person's face.
  • the desired parameterization may include a 3D surface model of the person's face.
  • Step 205 may include generating an object boundary of the person's face as a 3D surface model.
  • step 205 may include defining a point in the image and estimating a probability that the point is part of the portion of the image showing the face. Once probability estimates are computed for multiple points, portions of the image showing the person's face may be distinguished from portions of the image that do not show the person's face.
  • extracting a parameterization of an object of interest comprising a face may include receiving annotations for locations of facial landmarks (e.g., eyes, the point of the nose, the corners of the mouth, etc.).
  • Each of the landmarks K may be represented by a 2D position in a given image.
  • the 2D positions of each of the landmarks may be concatenated in a single K*2 dimensions vector, where the vector may be used as a parameterization of the face.
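  • The K*2 concatenation described above can be sketched directly (landmark coordinates are hypothetical values, not from the patent):

```python
# K facial landmarks as 2D positions (pixel coordinates, invented values)
landmarks = [(120.0, 80.0),   # left eye
             (180.0, 82.0),   # right eye
             (150.0, 120.0),  # nose tip
             (130.0, 160.0),  # left mouth corner
             (170.0, 161.0)]  # right mouth corner

# concatenate into a single K*2-dimensional parameterization vector
param_vector = [coordinate for point in landmarks for coordinate in point]
```

For K = 5 landmarks this yields a 10-dimensional vector usable as the face parameterization.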
  • a 3D surface model of the person's face may be constructed from the generated probabilities and the image.
  • the 3D surface model of the person's face may constitute an object boundary.
  • step 205 may include defining a parameterization for an object boundary of a left ventricle myocardium.
  • step 205 may include receiving a series of labeled cardiac computed tomography angiography (CTA) images, where each image voxel may be labeled as being inside or outside the left ventricle myocardium.
  • Step 205 may then further include deforming the received images into a common space with elastic image registration and recording the transformation for each image (T i ).
  • a set of K points may be distributed over the surface of the annotated myocardium in the common space, and the inverse transformation (T i −1) may be used to deform the set of points back to the original image space.
  • the K 3D points may be concatenated into a K*3 dimensional vector and used as a parameterization of the left ventricle myocardium.
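  • A toy version of this point-based parameterization follows, with a simple affine stand-in for the recorded inverse transformation T i −1 (the patent records an elastic registration, not an affine map):

```python
def apply_inverse_transform(point, scale=2.0, offset=(1.0, 0.0, 0.0)):
    # hypothetical stand-in for T_i^-1; a real system would invert the
    # recorded elastic registration rather than apply an affine map
    return tuple(scale * c + o for c, o in zip(point, offset))

# K points distributed over the myocardium surface in the common space
common_space_points = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (1.0, 0.0, 1.0)]

# deform back to the original image space and concatenate into K*3 values
image_space_points = [apply_inverse_transform(p) for p in common_space_points]
param_vector = [c for p in image_space_points for c in p]
```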
  • step 205 may include defining a parameterization for an object boundary of a vessel lumen.
  • step 205 may include receiving a surface model for the vessel lumen and a set of paths (e.g., centerlines) defining the approximate location of the center of the vessel.
  • step 205 may include defining a set of K rays extending from the vessel centerline, where the K rays comprise equal angularly distributed rays in the plane cross-sectional to the centerline. These rays may be intersected with the surface model of the lumen and the distance from the centerline to the intersection point may be recorded. These K distances may be concatenated in a K dimensional vector and used as the parameterization of a cross-sectional vessel shape.
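  • The ray-based cross-sectional parameterization can be sketched as follows; `ray_surface_distance` is a placeholder for the actual ray/surface-model intersection, which the patent leaves to the implementation:

```python
import math

def cross_section_parameterization(ray_surface_distance, K=16):
    # K equal angularly distributed ray directions in the plane
    # cross-sectional to the centerline
    angles = [2 * math.pi * k / K for k in range(K)]
    # distance from the centerline to the lumen surface along each ray,
    # concatenated into a K-dimensional shape vector
    return [ray_surface_distance(a) for a in angles]

# e.g., a perfectly circular lumen of radius 2.0 mm:
shape_vector = cross_section_parameterization(lambda angle: 2.0, K=8)
```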
  • the parameterization of the object boundary may include estimating the location of an object other than the structure of interest or estimating the object boundary using a parameter that is not on the boundary or surface of the structure of interest.
  • parameterization of the object boundary using an indication of the object boundary may include estimating a centerline location.
  • the parameterization may be performed by estimating the location of medial axes.
  • Medial axes may include a set of points of the structure of interest that have multiple closest points on the structure's boundary.
  • a medial axis may connect several points that are equidistant to more than one point on the structure's boundary.
  • medial axes may exist inside the boundaries/surfaces of the structure.
  • the medial axes may form a skeleton for the structure of interest.
  • Parameterization using a medial axes may also include finding, for at least one point on the medial axes, a distance from the point to the boundary of the structure of interest.
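  • The distance-to-boundary step of a medial-axis parameterization can be illustrated with a toy 2D boundary (uses `math.dist`, available in Python 3.8+):

```python
import math

def distance_to_boundary(axis_point, boundary_points):
    # distance from a medial-axis point to its nearest boundary sample
    return min(math.dist(axis_point, b) for b in boundary_points)

# toy structure: a circle of radius 2 centered at the origin, whose
# medial axis degenerates to the single center point
boundary = [(2.0 * math.cos(2 * math.pi * k / 64),
             2.0 * math.sin(2 * math.pi * k / 64)) for k in range(64)]
radius_at_axis = distance_to_boundary((0.0, 0.0), boundary)
```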
  • step 205 may include defining a desired type of parameterization for the object boundary of a structure of interest, as well as estimating (or parameterizing) the object boundary of the structure of interest.
  • step 207 may include training a model to predict a probability density function (PDF) for the object boundary parameterization of the training data (e.g., images and annotations of steps 201 and 203 ).
  • the training may predict a PDF of a joint distribution of the object boundary parametrization, a PDF for each parameter of the object boundary parameterization, etc.
  • a variety of machine learning techniques may be used to train the model.
  • training the model may include approximating the PDF of a parameter of the object boundary parametrization by using a mixture model (e.g., using a mixture density model).
  • the landmark points may indicate a point estimate for a distance from a given input location (e.g., the center of an input patch) to the object boundary in a given direction.
  • the input patch may be comprised of a 2D frame of a curvilinear planar representation (e.g., a 2D rectangle extracted roughly orthogonal to the direction of a vessel centerline).
  • Step 207 may include training a model to predict a point estimate/value for the distance, as well as determining a statistical value indicating the certainty of the predicted distance value.
  • determining the statistical value may include modeling a statistical uncertainty in the distance value using a mixture of Gaussians (MoG).
  • step 207 may include training a model to learn to predict weighting coefficients, mean values, and standard deviations to describe the MoG.
  • the MoG may be combined with a neural network structure in the form of a mixture density network (MDN).
  • the combined MoG and neural network structure may be optimized with a backpropagation algorithm using stochastic gradient descent. The optimization may include optimizing toward any of several local minima (rather than a global optimum).
  • the local minimum that is reached may be dictated by many factors, e.g., the neural network structure, the parameter initialization, the learning schedule, the type of optimization, etc. Any machine learning technique that is able to predict a multi-variate output (e.g. variants of deep learning, support vector machine, random forests, etc.) could be applied in this optimization step.
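  • The mixture-of-Gaussians head of an MDN can be sketched in a few lines. The raw values stand in for unconstrained network outputs, and the softmax/exponential constraints are the standard MDN construction rather than anything specific to this patent:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mog_pdf(x, weights, means, stds):
    # p(x) = sum_i w_i * Normal(x; mu_i, sigma_i)
    return sum(w * math.exp(-0.5 * ((x - mu) / sd) ** 2)
               / (sd * math.sqrt(2 * math.pi))
               for w, mu, sd in zip(weights, means, stds))

# hypothetical raw network outputs for a 2-component mixture
raw_weights = [0.2, 1.1]    # unconstrained logits
raw_log_stds = [-0.5, 0.3]  # unconstrained log standard deviations
means = [1.8, 2.4]          # candidate boundary distances, e.g. in mm

weights = softmax(raw_weights)              # mixing coefficients sum to 1
stds = [math.exp(r) for r in raw_log_stds]  # standard deviations positive
density = mog_pdf(2.0, weights, means, stds)
```

During training, the negative log of this density at the annotated boundary distance would serve as the loss minimized by backpropagation.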
  • step 207 of training a model to predict a PDF may include approximating a PDF by modeling a distribution for latent variables and sampling multiple predictions from the modeled distribution.
  • training the model may include approximating PDF variational models (e.g., semi-supervised variational autoencoders). These PDF variational models may predict object boundary parameters, based on randomly sampled input parameters.
  • the PDF variational models may also be conditioned on image feature values.
  • the spread of these predictions may be summarized by a standard deviation.
  • such a standard deviation may represent the level of uncertainty of one or more segmentations of the object boundary.
  • step 207 may include training a model to predict a PDF by computing standard deviations as described above.
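  • Sampling-based uncertainty as described above can be sketched with a one-line hypothetical decoder in place of the variational model:

```python
import random
import statistics

random.seed(0)  # deterministic toy example

def predict_boundary(z):
    # hypothetical decoder mapping a latent sample z to a boundary distance
    return 2.0 + 0.1 * z

# draw many latent samples and summarize the spread of the predictions;
# the standard deviation serves as the uncertainty proxy
samples = [predict_boundary(random.gauss(0.0, 1.0)) for _ in range(1000)]
uncertainty = statistics.stdev(samples)  # close to 0.1 for this decoder
```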
  • Step 209 may include saving the results of the model, including the predicted PDF of each parameter of the object parameterization, e.g., to a physical data storage device, cloud storage device, etc.
  • FIG. 2B is a block diagram of an exemplary testing phase (or usage or “production phase”) method 210 for predicting the certainty of a segmentation for a specific patient image, according to an exemplary embodiment of the present disclosure.
  • the uncertainty of the segmentation may be predicted using a trained model (e.g., from method 200 ).
  • the uncertainty may be conveyed by a probability density function.
  • step 211 may include receiving one or more medical images in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.).
  • the images may include medical images, e.g., images may be from any medical imaging device, e.g., CT, MR, SPECT, PET, microscope, ultrasound, (multi-view) angiography, etc.
  • if the training images (e.g., of method 200) were of one patient, step 211 may include receiving images of that same patient.
  • step 211 may include receiving one or more images from a non-medical imaging device, e.g., a camera, satellite, radar, lidar, sonar, telescope, microscope, etc.
  • images received during step 211 may be referred to as “testing images” or “production images.”
  • step 213 may include initializing the trained model (of method 200 ) for use on the patient-specific image of step 211 .
  • the trained model may predict a PDF for a parameter of an object parameterization of a structure of interest.
  • Step 213 may include initializing the trained model by providing an object parameterization for the trained model's PDF prediction.
  • step 213 may include receiving an estimate of the segmentation boundary or an estimate of the location or boundary of another object.
  • a known segmentation algorithm may be applied to (patient-specific) image(s) of step 211 .
  • an object boundary of a structure of interest in the patient-specific image(s) may be approximated by its medial axis (e.g., a centerline), using a centerline extraction algorithm from the literature.
  • step 215 may include using the trained model to predict a PDF for a parameter of the object parameterization. For instance, if an MDN is used, the predictions may describe the parameters of a mixture of Gaussians which may approximate the PDF. For example, an MDN may be optimized during training (e.g., method 200 ) to match the training data. Then, during testing, the trained model may predict the MDN that best matches the received patient-specific image(s). In one embodiment, step 215 may include using the trained model to predict the PDF for each parameter of the object parameterization.
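For illustration, a minimal sketch of evaluating a mixture-of-Gaussians PDF of the kind an MDN might predict for one parameter of the object parameterization could look like the following (all component values are hypothetical, and this is not part of the claimed subject matter):

```python
import numpy as np

def mixture_pdf(x, weights, means, stds):
    """Evaluate a 1-D mixture-of-Gaussians PDF at x.

    weights, means, stds are the per-component parameters an MDN
    might predict for one parameter of the object parameterization.
    """
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    s = np.asarray(stds, dtype=float)
    # Gaussian density of each component, weighted by its mixing coefficient
    comps = w / (s * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - m) / s) ** 2)
    return float(comps.sum())

# e.g., a two-component mixture for a lumen-radius parameter (values hypothetical)
p = mixture_pdf(2.0, weights=[0.7, 0.3], means=[2.0, 3.0], stds=[0.2, 0.5])
```

The mixing coefficients sum to one, so the mixture integrates to one like any other density; selecting a representative value (step 217) then amounts to, e.g., taking the maximum of this function.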
  • step 217 may include selecting a representative value from the PDF for a parameter of the object parameterization. This step may be performed with any number of standard methods, e.g., maximum a posteriori, maximum likelihood, etc. Alternately or in addition, step 217 may include resolving a complete segmentation boundary based on the PDF. In one embodiment, resolving a complete segmentation boundary may be performed using surface reconstruction. For instance, a 3D surface reconstruction algorithm (e.g., Poisson surface reconstruction) may be used to reconstruct a surface from a cloud of predicted segmentation boundary locations. Specific variants of a 3D surface reconstruction algorithm can be designed that fully utilize the PDF without first selecting a representative value from the PDF.
  • one embodiment could include using an algorithm similar to a mean-shift mode finding algorithm comprising an iterative process of reconstructing the boundary and refining the PDF.
  • an initial representative value may be selected, surface reconstruction may be performed with a known reconstruction technique, and the PDF may be reweighted based on how closely the PDF resembles the reconstructed surface. This process may be repeated until convergence.
  • the advantage of this process is that locally incorrect estimates of the PDF may be corrected by the global nature of the surface reconstruction technique.
  • a current approach may result in a prediction of 2 mm for the single (local) point and 3 mm for the surrounding portions of the lumen.
  • 2 mm may be the prediction for the local point in an early iteration, but calculations may quickly converge to a prediction of 3 mm because of the neighboring distances.
  • this structure and geometry may serve as prior knowledge to constrain the space of possible reconstructions.
  • tubular structures may be represented by a set of ellipse landmarks along a centerline.
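As one way to picture such a prior, a tubular structure represented by ellipse landmarks along a centerline might be sketched as the following data structure (all names are illustrative, not the patented representation):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class EllipseLandmark:
    center: np.ndarray   # 3-D centerline point
    major_axis: float    # semi-axis lengths (e.g., mm)
    minor_axis: float
    tangent: np.ndarray  # centerline direction; ellipse lies in the orthogonal plane

@dataclass
class TubularModel:
    landmarks: List[EllipseLandmark]

    def cross_sectional_areas(self):
        # area of each elliptical cross-section: pi * a * b
        return [np.pi * lm.major_axis * lm.minor_axis for lm in self.landmarks]

model = TubularModel([
    EllipseLandmark(np.array([0.0, 0.0, 0.0]), 2.0, 1.0, np.array([0.0, 0.0, 1.0])),
    EllipseLandmark(np.array([0.0, 0.0, 1.0]), 1.0, 1.0, np.array([0.0, 0.0, 1.0])),
])
areas = model.cross_sectional_areas()
```

Constraining reconstruction to such a parameterization restricts the space of possible surfaces to those consistent with the known tubular geometry.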
  • step 217 may include outputting one or both of the complete segmentation boundary and/or the predicted PDF for one or more parameters of the object parameterization.
  • the output may be stored in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or displayed (e.g., via an interface, screen, projection, presentation, report, dashboard, etc.).
  • Computed PDFs may have several applications or uses.
  • a PDF may be used to compute a confidence score (e.g., by computing a conditional standard deviation).
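A minimal sketch of one such confidence computation — the standard deviation of a mixture-of-Gaussians PDF — might look like this (values are hypothetical):

```python
import numpy as np

def mixture_std(weights, means, stds):
    """Standard deviation of a mixture of Gaussians — one way to reduce a
    predicted PDF to a scalar uncertainty (a conditional standard deviation)."""
    w, m, s = (np.asarray(a, dtype=float) for a in (weights, means, stds))
    mean = np.sum(w * m)
    # mixture variance: E[sigma^2 + mu^2] - (E[mu])^2
    var = np.sum(w * (s ** 2 + m ** 2)) - mean ** 2
    return float(np.sqrt(var))

# A narrower mixture yields a smaller std, i.e., a more confident boundary.
uncertain = mixture_std([0.5, 0.5], [2.0, 4.0], [0.5, 0.5])
confident = mixture_std([0.5, 0.5], [2.9, 3.1], [0.1, 0.1])
```

A confidence score could then be, for instance, a decreasing function of this standard deviation.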
  • step 217 may include outputting the confidence score to an electronic display and/or an electronic storage device (hard drive, network drive, cloud storage, portable disk, etc.).
  • the confidence score may be used in multiple ways.
  • the confidence score or samples from the PDF may be used to guide a human expert or computational algorithm to focus on uncertain or ambiguous image/model/segmentation regions that may be analyzed more exhaustively. For example, regions of a segmentation boundary (e.g., object boundary of a structure of interest) that have confidence scores below a predetermined threshold score may be highlighted in a visual display.
  • regions of a segmentation boundary that have confidence scores below a predetermined threshold score may be prioritized in a sequence of displays prompting user validation of the segmentation. Prioritization of the display of the regions may include displaying the low-confidence regions for validation, prior to displaying regions with higher confidence scores. Higher confidence scores may be defined as confidence scores exceeding the predetermined threshold score.
  • a model validation workflow may display regions of a segmentation boundary based on the lowest to the highest confidence scores. For example, the workflow may prompt a technician to examine or validate regions of a segmentation boundary that have the lowest confidence scores, prior to reviewing regions of the boundary with higher confidence scores. One exemplary workflow may alert the technician only to regions of the segmentation boundary that have confidence scores below a predetermined threshold score. Alternately or in addition, the display may highlight portions with high confidence scores and offer only these portions as segments of the model that may be used for diagnostic evaluations.
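The prioritization described above can be sketched in a few lines; the region names, scores, and threshold here are purely illustrative:

```python
def review_queue(region_scores, threshold):
    """Order segmentation regions for validation: lowest confidence first,
    flagging only regions whose score falls below the threshold."""
    flagged = [(r, s) for r, s in region_scores.items() if s < threshold]
    return [r for r, _ in sorted(flagged, key=lambda rs: rs[1])]

# hypothetical per-region confidence scores
queue = review_queue({"ostium": 0.92, "bifurcation": 0.41, "distal": 0.63}, 0.7)
```

Only the below-threshold regions are surfaced, ordered so the technician sees the least certain region first.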
  • computed confidence score or samples from the PDF may also be used as a probabilistic input for determining physiological/biomechanical properties at one or more locations in a vascular or organ/muscle model (e.g., a blood flow characteristic including flow, velocity, pressure, fractional flow reserve (FFR), instantaneous fractional flow reserve (iFFR), axial stress, wall shear stress, strain, force, shape, size, volume, tortuosity, etc.).
  • the uncertainty of lumen boundary points may be used to model variations of the lumen geometry.
  • the PDF may provide capabilities to compute an uncertainty of FFR (axial stress, iFFR, etc.) computed based on the uncertain vessel size.
  • the PDF may provide an understanding of confidence interval or probability density functions of physiological/biomechanical properties computed based on a segmentation boundary (with a predicted PDF).
  • the probabilistic segmentation model and the biophysiological model could be trained in sequence or simultaneously in one end-to-end approach.
  • the confidence score may further be used as an input for a segmentation algorithm for weighting the training data.
  • training may include sampling certain training samples (e.g., training segmentations with a high degree of accuracy for the structure of interest) at a different frequency or at different iterations than uncertain training examples (e.g., training segmentations with a low degree of accuracy for the structure of interest).
  • the confidence score may help determine which sets of training data should have more impact on the training of the probability density function model (e.g., of method 200 ).
  • the confidence score may also be used to quantify the quality of a reconstruction algorithm. Quantifying the quality of a reconstruction algorithm may provide indication of a preferred segmentation. For instance, a preferred reconstruction algorithm may include a reconstruction algorithm that produces an image which leads to a more certain segmentation over image(s) produced by other reconstruction algorithms. Similarly, for various models, the confidence score may be used to select among different parameterized models. A model (e.g., a complete segmentation boundary) with higher confidence in a segmentation may be preferred. In one embodiment, multiple different models may also be presented to a user for review and/or selection, possibly with the guidance of the confidence score information. Lastly, the PDF or confidence score may be used to build an integrated model/integrating multiple images to produce a resultant model/image, reconstruction, or enhancement that meets or exceeds a given, target PDF or confidence score.
  • FIGS. 3A and 3B are directed to specific embodiments or applications of the exemplary methods discussed in FIGS. 2A and 2B .
  • FIG. 3A and FIG. 3B respectively, describe an exemplary training phase and testing phase for performing probabilistic segmentation of blood vessels.
  • the accuracy of patient-specific segmentation of blood vessels may impact several medical assessments, including blood flow simulations, calculations of geometric characteristics of blood vessels, plaque rupture predictions, perfusion predictions, etc.
  • although a single segmentation output of a blood vessel may be provided by automated algorithms, several uncertainties may exist about the precise location of a segmentation boundary. Even two human experts may favor different annotations of a vessel boundary due to ambiguous image content.
  • method 300 may model the uncertainty (or the confidence) in a generated vessel segmentation boundary by modeling a probability density function (PDF) around the vessel segmentation boundary.
  • Such a probabilistic segmentation model could be applied to model the uncertainty of plaque, tissue, organ or bone segmentation, where the segmentation model may capture multiple plausible segmentation realizations or summarize the variability of the model.
  • FIG. 3A is a flowchart of an exemplary method 300 for a training phase designed to provide the basis for performing probabilistic segmentation of a patient's blood vessels, according to various embodiments.
  • step 301 may include receiving one or more images of coronary arteries in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). These images may, for instance, be from a medical imaging device, such as CT, MR, SPECT, PET, ultrasound, (multi-view) angiography, etc. These images may be referred to as “training images.”
  • step 303 may include receiving annotations of the vessel lumen boundary and the vessel lumen centerline(s) of each of the training images.
  • step 303 may include receiving or generating a geometric mesh of the coronary vessels represented in the received images.
  • the geometric mesh may be specified as a set of vertices and edges.
  • step 303 may include receiving a centerline of the coronary vessels.
  • the centerline may also be represented as a set of vertices that may be connected by edges.
  • step 305 may include transforming the training image data (e.g., the geometric mesh, vertices, edges, centerline, etc.) into a curvilinear planar representation (CPR).
  • a set of planes (e.g., frames) may be extracted along the centerline (e.g., orthogonal to the centerline) to form a 3D volume.
  • the 3D volume may comprise a curvilinear planar representation (CPR), with a coordinate system frame of reference defining two dimensions and the centerline length defining a third dimension.
  • the curvilinear planar representation may eliminate degrees of freedom (e.g., the curvature of the centerline), which may not be relevant for predicting one or more parameters of the coronary vessels.
  • the curvature of the centerline may be irrelevant for determining a parameter related to the location of the coronary vessels' lumen boundary.
  • step 307 may include training a statistical model that may map the CPR or sub-regions of the CPR to one or more distances from the centerline to boundary points of the lumen. For instance, one such exemplary distance may include the distance from the center of each CPR frame to the lumen boundary in a set of angular directions around the centerline. To model the uncertainty around each distance output, step 307 may include assuming a mixture of Gaussian (MoG) model. In one embodiment, the trained statistical model may predict a set of standard deviations and mixing coefficients for each distance output. The standard deviations and mixing coefficients may approximate the PDF by a mixture of normal distributions.
  • the objective function of the statistical model may be specified, in one embodiment, as the loss between each annotated distance from the centerline to the lumen and the conditional mean of the MoG model.
  • the objective function for the MoG model may be uniquely defined, and the conditional mean of the MoG model may comprise a weighted mean.
  • the MoG may provide one or more means, and weights corresponding to each of the means may be determined based on mixing coefficients of the mixture model.
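The weighted-mean computation above can be sketched as follows; the loss shown is one plausible instantiation (squared error) of the objective described, with hypothetical values:

```python
import numpy as np

def conditional_mean(weights, means):
    """Weighted mean of MoG component means — the point prediction
    compared against each annotated centerline-to-lumen distance."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    return float(np.sum(w * m) / np.sum(w))

def mog_loss(annotated_distance, weights, means):
    """Squared loss between an annotated distance and the MoG conditional mean."""
    return (annotated_distance - conditional_mean(weights, means)) ** 2

# two components with mixing coefficients 0.25 and 0.75 (values hypothetical)
pred = conditional_mean([0.25, 0.75], [2.0, 4.0])
loss = mog_loss(3.5, [0.25, 0.75], [2.0, 4.0])
```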
  • FIG. 3B is a block diagram of an exemplary method 310 for a testing phase that may provide a probabilistic segmentation of a patient's blood vessels, according to one embodiment.
  • step 311 may include receiving image data of a patient's coronary artery in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.).
  • step 313 may include transforming the received image data into a curvilinear planar representation (CPR).
  • step 313 may include receiving a prediction of the centerline of the patient's coronary artery (e.g., from any centerline detection algorithm of the literature).
  • a set of planes may be extracted along the centerline (e.g., orthogonal to the centerline) to include a 3D volume (e.g., CPR) with the coordinate system frame of reference defining two dimensions and the centerline length defining the third dimension.
  • the transformation parameters (e.g., translation, scale, rotation) used to create the CPR may be stored for use in later steps.
  • step 315 may include using saved results of the training phase (of method 300 ) to predict distances from the center of each CPR frame to a lumen boundary in the angular directions around a centerline, specified during the training. Alternately or in addition, step 315 may include using saved results of the training phase to predict standard deviation(s) and mixing coefficient(s) for each predicted distance. As a further embodiment, step 315 may include computing a conditional mean and/or a conditional standard deviation of each estimated lumen boundary. In one embodiment, the conditional mean of an estimated lumen boundary may include the location of a landmark point on a lumen mesh. A conditional standard deviation may indicate the uncertainty of the location of the landmark point.
  • step 317 may include generating an anatomic model of the patient's imaged coronary artery.
  • the anatomic model may include a final lumen segmentation.
  • step 317 may include transforming the predicted landmark point(s) from the CPR representation back to the original 3D image space. The orientation and position of each frame along the centerline may be determined from the creation of a CPR (if stored during step 313 ).
  • the 3D points may be computed from the CPR, and any 3D surface reconstruction method (e.g., Poisson surface reconstruction) may be applied to this point cloud of landmark point(s) to construct the anatomic model or final lumen segmentation.
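Mapping a predicted landmark from a CPR frame back to the original image space is a simple change of coordinates, assuming each frame's origin and in-plane axes were stored when the CPR was created (names here are illustrative):

```python
import numpy as np

def cpr_to_world(u, v, frame_origin, frame_x, frame_y):
    """Map an in-plane landmark (u, v) of one CPR frame back to the
    original 3-D image space, given the frame's stored origin and
    orthonormal in-plane axes."""
    return (np.asarray(frame_origin, dtype=float)
            + u * np.asarray(frame_x, dtype=float)
            + v * np.asarray(frame_y, dtype=float))

# e.g., a frame centered at (10, 0, 5) with in-plane axes along x and y
pt = cpr_to_world(1.5, -0.5, [10.0, 0.0, 5.0], [1, 0, 0], [0, 1, 0])
```

Applying this to every predicted landmark yields the point cloud to which a surface reconstruction method can then be applied.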
  • step 319 may include outputting the anatomic model/complete segmentation boundary of the vessels and the associated uncertainty values to an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or display.
  • FIGS. 4A and 4B describe a probabilistic segmentation method which uses a learning system trained using local image descriptors. This type of probabilistic segmentation may take place without a regressor or image transformation.
  • the location of a boundary point may be uncertain, meaning image elements in the neighborhood of the boundary point may be viable alternative candidates.
  • a segmentation model may be trained to assign probabilities to the likelihood that an element of an image includes an object boundary point. The probabilities in the local neighborhood of the element may then be interpreted as the uncertainty or confidence of the lumen boundary point.
  • One such segmentation model (e.g., statistical method) may include a model trained to classify object boundary points in an image (e.g., in a 3D volume).
  • the classification of the object boundary points may be based on local image descriptors of the image.
  • local image descriptors may include local image intensities, Gaussian derivative image features, Gabor filter responses, learned features from pre-trained artificial neural network(s), etc.
  • FIG. 4A provides an exemplary training phase for this type of segmentation model
  • FIG. 4B describes an embodiment of testing this segmentation model.
  • FIG. 4A is a flowchart of an exemplary method 400 for a training phase designed to provide a probabilistic segmentation model based on local image descriptors, according to various embodiments.
  • step 401 may include receiving one or more images of an object of interest in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). These images may, for instance, be from a medical imaging device, e.g., CT, MR, SPECT, PET, ultrasound, (multi-view) angiography, etc.
  • the images may include images of a patient's anatomy. These images may be referred to as “training images.”
  • step 403 may include receiving, for each image element (e.g., voxel), a label.
  • the label may indicate the boundary of a structure of interest.
  • Step 405 may include defining the boundary of the structure of interest based on the label (e.g., by image voxels).
  • step 407 may include training a statistical model (e.g., a nearest neighbor classifier, a random forest classifier, etc.) to predict the probability that a voxel or a point at an image location (e.g., the center of an image sub-region) comprises a boundary point.
  • the probability may be determined based on a local image descriptor associated with the voxel or the point.
  • the local image descriptor may be provided by the label (e.g., of step 403 ).
  • one or more of the training images may include detected image-derived features.
  • Each of the image-derived features may have a corresponding label and point location indicator (e.g., a point may be located at the center of an image sub-region).
  • the label may indicate whether the point constitutes a boundary point of the structure of interest (e.g., the point constitutes image background or the point constitutes a boundary of the structure of interest).
  • the training phase may include determining and/or receiving the training examples.
  • the training phase may include training the statistical model to predict a label for a new image, based on an image descriptor (e.g., an image-derived feature).
  • the image descriptor may be associated with a location (e.g., a voxel) of the new image, and the label for the new image may include a likelihood (e.g., probability) that the location of the new image constitutes a boundary of the object of interest.
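As one concrete reading of the nearest-neighbor variant mentioned above, a boundary probability could be estimated as the fraction of a descriptor's k nearest training neighbors labeled "boundary" — a toy sketch with synthetic 2-D descriptors follows (real descriptors would be local intensities, Gaussian-derivative features, etc.):

```python
import numpy as np

def knn_boundary_prob(descriptor, train_descriptors, train_labels, k=5):
    """k-nearest-neighbour estimate of the probability that a voxel's local
    descriptor marks a boundary point: the fraction of its k nearest
    training descriptors labelled 'boundary' (label 1)."""
    dists = np.linalg.norm(train_descriptors - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(train_labels[nearest]))

# toy 2-D descriptors: boundary examples cluster near (1, 1)
X = np.array([[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [0.0, 0.0], [0.1, -0.1]])
y = np.array([1, 1, 1, 0, 0])
p = knn_boundary_prob(np.array([1.0, 1.0]), X, y, k=3)
```

Any classifier exposing class probabilities (e.g., a random forest) could serve the same role.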
  • FIG. 4B is a block diagram of an exemplary method 410 for a testing phase that may provide a probabilistic segmentation based on local image descriptors, according to one embodiment.
  • step 411 may include receiving an image in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.).
  • the image may include a representation of a portion of a patient's anatomy.
  • step 413 may include predicting the probability that a location (within a local image region of the received image) comprises a boundary point (e.g., of the patient's anatomy). For example, step 413 may include predicting the probability that the location is a boundary point of the object, using the trained statistical model (of method 400 ). In one embodiment, step 413 may include defining a region around a likely boundary point to be a region of uncertainty. Alternately or in addition, step 413 may include applying a shape-fitting model to determine the object boundary. The application of the shape-fitting model may have the goal of covering the space around a segmentation boundary. For example, one scenario may include applying a shape-fitting model to cover the entire space around a segmentation boundary.
  • applying the shape-fitting model to determine the object boundary may be based on the boundary points in which the regions of uncertainty may lie along the normal directions of the object boundary.
  • uncertainties may be in the direction of the respective rays.
  • the spatial distribution of the probability map in the uncertainty region may be summarized as an uncertainty score. For instance, the standard deviation of the probability map may be computed within the region to determine an uncertainty score.
  • another statistical classifier could be trained to map uncertainty regions to an uncertainty score.
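The standard-deviation summary described above reduces to a one-liner over the probability map within the uncertainty region (the arrays here are hypothetical):

```python
import numpy as np

def region_uncertainty(prob_map, mask):
    """Summarize a boundary-probability map inside an uncertainty region as a
    single score: the standard deviation of the map values within the mask."""
    return float(np.std(prob_map[mask]))

# a flat region (all probabilities equal) scores 0 — no spatial ambiguity
flat = region_uncertainty(np.full((4, 4), 0.5), np.ones((4, 4), dtype=bool))
```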
  • step 415 may include generating and outputting a complete segmentation boundary of the object and the associated uncertainty values to an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or display.
  • the object may include at least a portion of the patient's anatomy.
  • a segmentation may render various segmentation boundary locations of varying degrees of accuracy.
  • the present disclosure is directed to providing information on the statistical confidence (or statistical uncertainty) of a segmentation by predicting a probability distribution of segmentation boundary locations.
  • the probability distribution may include a probability density function.
  • One embodiment of predicting a probability density function may include two phases: a training phase and a testing phase.
  • An exemplary training phase may include recognizing patterns of statistical uncertainty from a collection of parameterizations or segmentations, and storing the recognized patterns.
  • An exemplary testing phase may include using the recognized patterns to predict a probability density function of an image segmentation of a new image.
  • the training phase may include training a statistical model or system to recognize the patterns of uncertainty in segmentations.
  • the testing phase may include applying the trained statistical model to a new image, segmentation, or object parameterization, to predict a probability density function or confidence value for the segmentation/parameterization.
  • the probability density function or confidence value may be used in improving the accuracy of segmentation(s), models made from segmentation(s), or analyses performed using such models.

Abstract

Systems and methods are disclosed for performing probabilistic segmentation in anatomical image analysis, using a computer system. One method includes receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.

Description

RELATED APPLICATION(S)
This application claims priority to U.S. Provisional Application No. 62/438,514 filed Dec. 23, 2016, the entire disclosure of which is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
Various embodiments of the present disclosure relate generally to medical imaging and related methods. More specifically, particular embodiments of the present disclosure relate to systems and methods for performing probabilistic segmentation in anatomical image analysis.
BACKGROUND
One type of medical image analysis involves decomposing an image into meaningful regions, e.g., imaged anatomical structures versus image artifacts. This process is sometimes referred to as (medical) image segmentation. In some cases, the presence of image noise or other image artifacts may hinder accurate segmentation of anatomical structures of interest. A common approach for automating segmentation may include using a learning based system. A learning based system may be trained to predict a class label probability for each image element (e.g., pixel, voxel, or object parameter). For instance, a shape may be fitted, or a structured output model may be applied to assign a label to each image element based on its predicted label probability. A learning based system may also be trained to estimate, for an image, the locations of boundaries of different structures within the image. In some cases, these estimated boundaries may not align with the boundaries between discrete image elements. The final segmentation may reflect a likely segmentation for a given image.
However, many segmentation boundary locations may be plausible. Even human technicians may select or draw different boundary locations due to ambiguous image data. In addition, boundaries estimated by a trained system may be ambiguous due to a lack of appropriate training examples or due to a sub-optimally trained method. In this case, the trained system may not provide accurate object boundaries. A final segmentation often does not indicate whether the segmentation is a nearly definite accurate representation of a portion of anatomy, or whether an alternative segmentation is almost equally likely to render an accurate representation of anatomy. Further, the final segmentation may not indicate region(s) where the final segmentation boundary was more certain or less certain. Accordingly, a desire exists to understand the statistical confidence (or statistical uncertainty) of a provided segmentation. A desire also exists for generating alternative or aggregated segmentations.
The present disclosure is directed to overcoming one or more of the above-mentioned problems or interests.
SUMMARY
According to certain aspects of the present disclosure, systems and methods are disclosed for performing probabilistic segmentation in image analysis. One method of performing probabilistic segmentation in anatomical image analysis includes: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure. In accordance with another embodiment, a system for performing probabilistic segmentation in anatomical image analysis comprises: a data storage device storing instructions for performing probabilistic segmentation in anatomical image analysis; and a processor configured for: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.
In accordance with yet another embodiment, a non-transitory computer readable medium for use on a computer system containing computer-executable programming instructions for performing a method of probabilistic segmentation in anatomical image analysis is provided. The method includes: receiving a plurality of images of an anatomical structure; receiving one or more geometric labels of the anatomical structure; generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images; mapping a region of the parameterized representation to a geometric parameter of the anatomical structure; receiving an image of a patient's anatomy; and generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation of the anatomical structure to the geometric parameter of the anatomical structure.
Additional objects and advantages of the disclosed embodiments will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the disclosed embodiments. The objects and advantages of the disclosed embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosed embodiments, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate various exemplary embodiments and together with the description, serve to explain the principles of the disclosed embodiments.
FIG. 1 is a block diagram of an exemplary system and network for performing probabilistic segmentation in image analysis, according to an exemplary embodiment of the present disclosure.
FIGS. 2A and 2B are flowcharts of an exemplary method for performing probabilistic segmentation using parameterized object boundaries, according to an exemplary embodiment of the present disclosure.
FIGS. 3A and 3B are flowcharts of an exemplary embodiment of the method of FIGS. 2A and 2B, as applied to probabilistic segmentation of blood vessels, according to an exemplary embodiment of the present disclosure.
FIGS. 4A and 4B are flowcharts of an exemplary embodiment of the method for performing probabilistic segmentation using local image descriptors, according to an exemplary embodiment of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS
Reference will now be made in detail to the exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
As described above, medical image segmentation may involve rendering various segmentation boundary locations. Often, a final segmentation includes a likely segmentation or boundary for an object of interest (e.g., an anatomical structure) represented in a given image. However, many other segmentation boundary locations may be plausible as well. While class label probability for image elements used in segmentation may be post-processed, a desire exists to understand the confidence or uncertainty of a final segmentation. Further, a desire exists to provide alternative solution(s) to the final segmentation.
The present disclosure is directed to providing information on the statistical confidence or statistical uncertainty of a segmentation by predicting a probability distribution of segmentation boundary locations. In one embodiment, the present disclosure may include both a training phase and a testing (or usage or production) phase to estimate a segmentation boundary and/or probability density function for the segmentation boundary. For example, the disclosed systems and methods may be applied to segmenting anatomy in received image(s) of a patient of interest and determining the probability density function for the boundary of the segmented anatomy. In one embodiment, the training phase may include developing a set of probability density functions for at least one parameter of an object parameterization associated with received images. For example, the training phase may involve receiving a collection of images, receiving or inputting information of an anatomical part or portion shown in each of the images (e.g., a localized anatomy for each of the images), defining a parameterization of the anatomical part or portion, and generating a probability density function for each parameter of the parameterization, based on the received information. An output from the training phase may include a probability density function for each parameter of an anatomical or medical model.
In one embodiment, a testing phase may include receiving images of a patient's anatomy. The patient may be a patient of interest, e.g., a patient desiring a diagnostic test. The testing phase may involve completing a segmentation (e.g., rendering a segmentation boundary or surface reconstruction) of the patient's anatomy based on the received images and using the stored training phase probability density function(s) to compute a patient-specific probability density function for a parameter of the completed segmentation. As used herein, the term “exemplary” is used in the sense of “example,” rather than “ideal.” Although this exemplary embodiment is written in the context of medical image analysis, the present disclosure may equally apply to any non-medical image analysis or computer vision evaluation.
Referring now to the figures, FIG. 1 depicts a block diagram of an exemplary environment of a system and network for performing probabilistic segmentation. Specifically, FIG. 1 depicts a plurality of physicians 102 and third party providers 104, any of whom may be connected to an electronic network 100, such as the Internet, through one or more computers, servers, and/or handheld mobile devices. Physicians 102 and/or third party providers 104 may create or otherwise obtain images of one or more patients' cardiac, vascular, and/or organ systems. The physicians 102 and/or third party providers 104 may also obtain any combination of patient-specific information, such as age, medical history, blood pressure, blood viscosity, etc. Physicians 102 and/or third party providers 104 may transmit the cardiac/vascular/organ images and/or patient-specific information to server systems 106 over the electronic network 100. Server systems 106 may include storage devices for storing images and data received from physicians 102 and/or third party providers 104. Server systems 106 may also include processing devices for processing images and data stored in the storage devices. Alternatively or in addition, the probabilistic segmentation of the present disclosure (or portions of the system and methods of the present disclosure) may be performed on a local processing device (e.g., a laptop), absent an external server or network.
FIGS. 2A and 2B describe one embodiment for performing probabilistic segmentation using a learning system trained with parameterized object boundaries. FIGS. 3A and 3B are directed to specific embodiments or applications of the methods discussed in FIGS. 2A and 2B. For example, FIG. 3A and FIG. 3B describe an embodiment for the probabilistic segmentation of blood vessels using a learning system trained on parameterized object boundaries. FIGS. 4A and 4B describe another probabilistic segmentation method which employs a learning system trained using local image descriptors. In contrast to the methods of FIGS. 2A and 2B, the methods of FIGS. 4A and 4B permit segmenting an image without a regressor or simplifying transformation. All of the methods may be performed by server systems 106, based on information, images, and data received from physicians 102 and/or third party providers 104 over electronic network 100.
FIGS. 2A and 2B describe an embodiment for predicting and testing a probability density function. Understanding a probability density function in association with a segmentation may be helpful for understanding whether the resulting segmentation is reliable, or whether portion(s) of the resulting segmentation are possibly inaccurate. Since analyses may be performed based on the segmentation, the present disclosure may provide an understanding of where or how the analyses could be inaccurate due to the segmentation. For example, accuracy of segmentation of a large vessel may impact blood flow analyses, whereas accuracy of segmentation in a small downstream vessel may have little influence on the accuracy of a predicted blood flow value. A probability density function could help determine whether to use a segmentation for an analysis, or which part of a segmentation could be used reliably for an analysis.
In one embodiment, probabilistic segmentation may include two phases: a training phase and a testing phase. The training phase may involve training a learning system (e.g., a deep learning system) to predict a probability density function (PDF) for a parameter of a segmentation/object parameterization. The testing phase may include predicting a probability density function for a parameter of an object parameterization of a newly received image.
FIG. 2A is a flowchart of an exemplary training phase method 200 for training a learning system (e.g., a deep learning system) to predict a probability density function, according to various embodiments. Method 200 may provide the basis for a testing phase or production phase in method 210 of FIG. 2B, for the probabilistic segmentation of an imaged object of interest of a specific patient. In one embodiment, step 201 of method 200 may include receiving one or more images in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). In a medical context, these images may, for instance, be from a medical imaging device, e.g., computed tomography (CT), positron emission tomography (PET), single-photon emission computerized tomography (SPECT), magnetic resonance imaging (MRI), microscope, ultrasound, (multi-view) angiography, etc. In one embodiment, multiple images for a single patient may be used. In a further embodiment, the images may comprise a structure of a patient's anatomy. In other embodiments, the images may be of numerous individuals having similar anatomical features or numerous individuals having different anatomical features. In a non-medical context, these images may be from any source, e.g., a camera, satellite, radar, lidar, sonar, telescope, microscope, etc. In the following disclosure, images received in step 201 may be referred to as, “training images.”
In one embodiment, step 203 may include receiving an annotation for one or more structures of interest shown in one or more of the training images. In one embodiment, all training images may be annotated. This type of embodiment may be referred to as “supervised learning.” Another embodiment may include only a subset of the training images with annotations. This type of scenario may be referred to as “semi-supervised learning.” In one embodiment, the structures of interest may include a blood vessel or tissue of the patient. In such a case, annotation(s) may include labels for vessel names (e.g., right coronary artery (RCA), left anterior descending artery (LAD), left circumflex artery (LCX), etc.), vessel landmarks (e.g., aortic valve point, ostia points, bifurcations, etc.), estimated vessel location, flags (e.g., noted portions where imaging is ambiguous or boundaries are unclear), etc.
In one embodiment, step 205 may include defining a parameterization for an object boundary of the structure of interest. For example, step 205 may include defining whether to generate an implicit or an explicit surface representation as an object boundary of the structure of interest. Defining the object boundary of the structure of interest may include subdividing the structure of interest into one or more portions and estimating the boundary of each of the portions. Alternately, the structure of interest may be subdivided into one or more segments, and the boundary of each of the segments may be estimated. Estimating the boundary may include estimating the location of the structure (or a surface representation of the structure), for example, by estimating the probability that a certain image region or voxel includes a representation of the structure.
One such scenario may involve receiving an image of a group of people. A person's face may be defined as the structure of interest and step 205 may include defining a desired parameterization for the person's face. For example, the desired parameterization may include a 3D surface model of the person's face. Step 205 may include generating an object boundary of the person's face as a 3D surface model. For instance, step 205 may include defining a point in the image and estimating a probability that the point is part of the portion of the image showing the face. Once probability estimates are computed for multiple points, portions of the image showing the person's face may be distinguished from portions of the image that do not show the person's face. In one embodiment, extracting a parameterization of an object of interest comprising a face may include receiving annotations for locations of facial landmarks (e.g., eyes, the point of the nose, the corners of the mouth, etc.). Each of the landmarks K may be represented by a 2D position in a given image. The 2D positions of each of the landmarks may be concatenated in a single K*2 dimensions vector, where the vector may be used as a parameterization of the face. A 3D surface model of the person's face may be constructed from the generated probabilities and the image. In one embodiment, the 3D surface model of the person's face may constitute an object boundary.
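The K*2-dimensional landmark vector described above may be sketched as follows; this is a minimal illustration, and the landmark names and coordinates are hypothetical:

```python
import numpy as np

def landmarks_to_parameterization(landmarks):
    """Concatenate K 2D facial landmark positions into a single
    K*2-dimensional parameterization vector, as described above.
    `landmarks` is a (K, 2) array of (x, y) image coordinates."""
    landmarks = np.asarray(landmarks, dtype=float)
    return landmarks.reshape(-1)  # shape (K*2,)

# Hypothetical landmarks: eyes, nose tip, mouth corners (K = 5).
face = np.array([[120.0, 80.0],    # left eye
                 [160.0, 82.0],    # right eye
                 [140.0, 110.0],   # nose tip
                 [125.0, 135.0],   # left mouth corner
                 [155.0, 136.0]])  # right mouth corner
vec = landmarks_to_parameterization(face)
print(vec.shape)  # (10,)
```

The same concatenation pattern applies to the K*3-dimensional myocardium vector discussed below, with 3D points in place of 2D positions.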
As another scenario, step 205 may include defining a parameterization for an object boundary of a left ventricle myocardium. In such a case of myocardium segmentation, step 205 may include receiving a series of labeled cardiac computed tomography angiography (CTA) images, where each image voxel may be labeled as being inside or outside the left ventricle myocardium. Step 205 may then further include deforming the received images into a common space with elastic image registration and recording the transformation for each image (Ti). Then, a set of K points may be distributed over the surface of the annotated myocardium in the common space, and the inverse transformation (Ti^−1) may be used to deform the set of points back to the original image space. Lastly, the K 3D points may be concatenated into a K*3 dimensional vector and used as a parameterization of the left ventricle myocardium.
In a case involving lumen segmentation, step 205 may include defining a parameterization for an object boundary of a vessel lumen. In such a scenario, step 205 may include receiving a surface model for the vessel lumen and a set of paths (e.g., centerlines) defining the approximate location of the center of the vessel. Then, for each cross-sectional image, step 205 may include defining a set of K rays extending from the vessel centerline, where the K rays comprise equal angularly distributed rays in the plane cross-sectional to the centerline. These rays may be intersected with the surface model of the lumen and the distance from the centerline to the intersection point may be recorded. These K distances may be concatenated in a K dimensional vector and used as the parameterization of a cross-sectional vessel shape.
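The ray-casting parameterization described above may be sketched as follows. Because no surface model is available here, the ray/surface-model intersection is replaced by a hypothetical radius function of the ray angle:

```python
import numpy as np

def cross_section_parameterization(boundary_radius_fn, K=16):
    """Cast K equally angularly distributed rays from the centerline
    point in the cross-sectional plane and record the distance at which
    each ray meets the lumen boundary. `boundary_radius_fn(theta)` is a
    stand-in for the ray/surface-model intersection described above."""
    angles = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
    distances = np.array([boundary_radius_fn(t) for t in angles])
    return distances  # K-dimensional parameterization of the cross-section

# Hypothetical slightly elliptical lumen: radius varies with ray angle.
ellipse = lambda theta: 2.0 + 0.3 * np.cos(2.0 * theta)
params = cross_section_parameterization(ellipse, K=8)
print(params.shape)  # (8,)
```

Concatenating such K-dimensional vectors along the centerline yields the per-cross-section vessel shape parameterization described in the text.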
Alternately or in addition, the parameterization of the object boundary may include estimating the location of an object other than the structure of interest or estimating the object boundary using a parameter that is not on the boundary or surface of the structure of interest. For example, if a blood vessel is the structure of interest, parameterization of the object boundary using an indication of the object boundary may include estimating a centerline location. As a more general example, the parameterization may be performed by estimating the location of medial axes. Medial axes may include a set of points of the structure of interest that have multiple closest points on the structure's boundary. For example, a medial axis may connect several points that are equidistant to more than one point on the structure's boundary. Rather than existing on the surface of the structure (e.g., at the structure's boundary), medial axes may exist inside the boundaries/surfaces of the structure. In other words, the medial axes may form a skeleton for the structure of interest. Parameterization using a medial axes may also include finding, for at least one point on the medial axes, a distance from the point to the boundary of the structure of interest. Overall, step 205 may include defining a desired type of parameterization for the object boundary of a structure of interest, as well as estimating (or parameterizing) the object boundary of the structure of interest.
In one embodiment, step 207 may include training a model to predict a probability density function (PDF) for the object boundary parameterization of the training data (e.g., images and annotations of steps 201 and 203). For example, the training may predict a PDF of a joint distribution of the object boundary parameterization, a PDF for each parameter of the object boundary parameterization, etc. A variety of machine learning techniques may be used to train the model.
For example, training the model may include approximating the PDF of a parameter of the object boundary parameterization by using a mixture model (e.g., using a mixture density model). For instance, an object boundary may be assumed to be represented by a set of landmark points. The landmark points may indicate a point estimate for a distance from a given input location (e.g., the center of an input patch) to the object boundary in a given direction. In one embodiment, the input patch may comprise a 2D frame of a curvilinear planar representation (e.g., a 2D rectangle extracted roughly orthogonal to the direction of a vessel centerline). Step 207 may include training a model to predict a point estimate/value for the distance, as well as determining a statistical value indicating the certainty of the predicted distance value.
For example, determining the statistical value may include modeling a statistical uncertainty in the distance value using a mixture of Gaussians (MoG). In one scenario, step 207 may include training a model to learn to predict weighting coefficients, mean values, and standard deviations to describe the MoG. The MoG may be combined with a neural network structure in the form of a mixture density network (MDN). In one embodiment, the combined MoG and neural network structure may be optimized with a backpropagation algorithm using stochastic gradient descent. The optimization may include optimizing toward any of several local minima (rather than a global optimum). The local minimum that is reached (out of the several local minima) may be dictated by many factors, e.g., the neural network structure, the parameter initialization, the learning schedule, the type of optimization, etc. Any machine learning technique that is able to predict a multi-variate output (e.g. variants of deep learning, support vector machine, random forests, etc.) could be applied in this optimization step. In the case of a MoG or MDN, the conditional mean (e.g., expected or ideal value) may be optimized to approximate the distance from the input patch to the object boundary.
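A minimal sketch of the mixture-of-Gaussians likelihood that an MDN-style model may be trained against; the component weights, means, and standard deviations below are hypothetical model outputs, not values from the disclosure:

```python
import numpy as np

def mog_nll(weights, means, stds, y):
    """Negative log-likelihood of an observed boundary distance y under
    a mixture of Gaussians -- the quantity a mixture density network
    (MDN) may be trained to minimize with stochastic gradient descent.
    `weights`, `means`, and `stds` are the per-distance MDN outputs."""
    comp = weights * np.exp(-0.5 * ((y - means) / stds) ** 2) \
           / (stds * np.sqrt(2.0 * np.pi))
    return float(-np.log(np.sum(comp)))

# Hypothetical MDN output for one distance: two Gaussian components.
w = np.array([0.4, 0.6])
mu = np.array([2.0, 3.0])   # mm
sd = np.array([0.2, 0.2])
print(mog_nll(w, mu, sd, 3.0) < mog_nll(w, mu, sd, 5.0))  # True: 3 mm is far more likely
```

In a full MDN, the weights, means, and standard deviations would be produced by a neural network conditioned on the input patch and optimized by backpropagation, as the text describes.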
As another example, step 207 of training a model to predict a PDF may include approximating a PDF by modeling a distribution for latent variables and sampling multiple predictions from the modeled distribution. For example, training the model may include approximating PDF variational models (e.g., semi-supervised variational autoencoders). These PDF variational models may predict object boundary parameters, based on randomly sampled input parameters. The PDF variational models may also be conditioned on image feature values. In one scenario, the spread of these predictions may be summarized by a standard deviation. In one embodiment, such a standard deviation may represent the level of uncertainty of one or more segmentations of the object boundary. Accordingly, step 207 may include training a model to predict a PDF by computing standard deviations as described above. Step 209 may include saving the results of the model, including the predicted PDF of each parameter of the object parameterization, e.g., to a physical data storage device, cloud storage device, etc.
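The sample-then-summarize idea above may be sketched as follows, with a hypothetical linear stand-in decoder in place of a trained variational model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_boundary_predictions(decode_fn, n_samples=500, latent_dim=4):
    """Draw latent samples, decode each into a boundary parameter, and
    summarize the spread of the predictions by a standard deviation, as
    in the variational approach described above. `decode_fn` stands in
    for a trained decoder (e.g., of a variational autoencoder)."""
    z = rng.standard_normal((n_samples, latent_dim))
    preds = np.array([decode_fn(zi) for zi in z])
    return preds.mean(), preds.std()

# Hypothetical decoder mapping latent codes to a lumen radius (mm).
decode = lambda z: 2.5 + 0.1 * z.sum()
mean_r, std_r = sample_boundary_predictions(decode)
```

Here `std_r` plays the role of the uncertainty level described in the text: a wider spread of decoded predictions indicates a less certain segmentation boundary.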
FIG. 2B is a block diagram of an exemplary testing phase (or usage or “production phase”) method 210 for predicting the certainty of a segmentation for a specific patient image, according to an exemplary embodiment of the present disclosure. In one embodiment, the uncertainty of the segmentation may be predicted using a trained model (e.g., from method 200). The uncertainty may be conveyed by a probability density function.
In one embodiment, step 211 may include receiving one or more medical images in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). In one embodiment, the images may include medical images, e.g., images may be from any medical imaging device, e.g., CT, MR, SPECT, PET, microscope, ultrasound, (multi-view) angiography, etc. In one embodiment, training images (e.g., of method 200) may include images acquired from one patient, and step 211 may include receiving images also of that one patient. Alternately or in addition, step 211 may include receiving one or more images from a non-medical imaging device, e.g., a camera, satellite, radar, lidar, sonar, telescope, microscope, etc. In the following steps, images received during step 211 may be referred to as “testing images” or “production images.”
In one embodiment, step 213 may include initializing the trained model (of method 200) for use on the patient-specific image of step 211. For example, the trained model may predict a PDF for a parameter of an object parameterization of a structure of interest. Step 213 may include initializing the trained model by providing an object parameterization for the trained model's PDF prediction. For example, step 213 may include receiving an estimate of the segmentation boundary or an estimate of the location or boundary of another object. In other words, as a preliminary step to reduce the search space of plausible segmentations, a known segmentation algorithm may be applied to (patient-specific) image(s) of step 211. For example, an object boundary of a structure of interest in the patient-specific image(s) may be approximated by its medial axis (e.g., a centerline), using a centerline extraction algorithm from the literature.
In one embodiment, step 215 may include using the trained model to predict a PDF for a parameter of the object parameterization. For instance, if an MDN is used, the predictions may describe the parameters of a mixture of Gaussians which may approximate the PDF. For example, an MDN may be optimized during training (e.g., method 200) to match the training data. Then, during testing, the trained model may predict the MDN that best matches the received patient-specific image(s). In one embodiment, step 215 may include using the trained model to predict the PDF for each parameter of the object parameterization.
In one embodiment, step 217 may include selecting a representative value from the PDF for a parameter of the object parameterization. This step may be performed with any number of standard methods, e.g., maximum a posteriori, maximum likelihood, etc. Alternately or in addition, step 217 may include resolving a complete segmentation boundary based on the PDF. In one embodiment, resolving a complete segmentation boundary may be performed using surface reconstruction. For instance, a 3D surface reconstruction algorithm (e.g., Poisson surface reconstruction) may be used to reconstruct a surface from a cloud of predicted segmentation boundary locations. Specific variants of a 3D surface reconstruction algorithm can be designed that fully utilize the PDF without first selecting a representative value from the PDF. For example, one embodiment could include using an algorithm similar to a mean-shift mode finding algorithm comprising an iterative process of reconstructing the boundary and refining the PDF. Alternately or in addition, an initial representative value may be selected, surface reconstruction may be performed with a known reconstruction technique, and the PDF may be reweighted based on how closely the PDF resembles the reconstructed surface. This process may be repeated until convergence. The advantage of this process is that locally incorrect estimates of the PDF may be corrected by the global nature of the surface reconstruction technique. For example, given a PDF indicating locally that the distance from the centerline to the lumen boundary has a 40% chance of being 2 mm and a 60% chance of being 3 mm, but all surrounding distances are predicted to be 3 mm with 100% certainty, a current approach may result in a prediction of 2 mm for the single (local) point and 3 mm for the surrounding portions of the lumen.
In the approach outlined above including iterative reweighting, however, 2 mm may be the prediction for the local point in an early iteration, but calculations may quickly converge to a prediction of 3 mm because of the neighboring distances. If the structure of interest exhibits a certain (e.g., known) structure or geometry, this structure and geometry may serve as prior knowledge to constrain the space of possible reconstructions. For instance, tubular structures may be represented by a set of ellipse landmarks along a centerline.
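The iterative reconstruct-and-reweight loop described above may be sketched as follows; the neighbor-weighted smoothing and the Gaussian agreement rule are simplified stand-ins for a full surface reconstruction:

```python
import numpy as np

def iterative_reweighting(candidates, weights, neighbor_value, n_iter=10):
    """Sketch of the reconstruct-and-reweight loop described above: a
    local PDF (candidate boundary distances with weights) is repeatedly
    reweighted toward the value implied by a smooth surface through the
    (certain) neighboring distances."""
    w = np.array(weights, dtype=float)
    c = np.array(candidates, dtype=float)
    for _ in range(n_iter):
        local_estimate = c[np.argmax(w)]  # current representative value
        # Crude "surface reconstruction": neighbor-weighted smoothing.
        reconstructed = (local_estimate + 2.0 * neighbor_value) / 3.0
        agreement = np.exp(-((c - reconstructed) ** 2))  # closeness to surface
        w = w * agreement
        w = w / w.sum()  # renormalize the PDF
    return c[np.argmax(w)]

# Locally, 2 mm looks most likely (60%), but certain 3 mm neighbors
# pull the PDF over, mirroring the 2 mm -> 3 mm convergence in the text.
print(iterative_reweighting([2.0, 3.0], [0.6, 0.4], 3.0))  # 3.0
```

In the first iteration the representative value is 2 mm; by the second iteration the agreement term has shifted the weights toward 3 mm, and the loop stays there, as described above.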
In one embodiment, step 217 may include outputting one or both of the complete segmentation boundary and/or the predicted PDF for one or more parameters of the object parameterization. The output may be stored in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or displayed (e.g., via an interface, screen, projection, presentation, report, dashboard, etc.).
Computed PDFs may have several applications or uses. For example, a PDF may be used to compute a confidence score (e.g., by computing a conditional standard deviation). In one embodiment, step 217 may include outputting the confidence score to an electronic display and/or an electronic storage device (hard drive, network drive, cloud storage, portable disk, etc.). The confidence score may be used in multiple ways. In one embodiment, the confidence score or samples from the PDF may be used to guide a human expert or computational algorithm to focus on uncertain or ambiguous image/model/segmentation regions that may be analyzed more exhaustively. For example, regions of a segmentation boundary (e.g., object boundary of a structure of interest) that have confidence scores below a predetermined threshold score may be highlighted in a visual display.
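One plausible form of such a confidence score, the conditional standard deviation of a mixture of Gaussians, may be sketched as follows; the mixture parameters are hypothetical:

```python
import numpy as np

def mixture_conditional_std(weights, means, stds):
    """Conditional standard deviation of a mixture of Gaussians, usable
    as a confidence score as described above (a larger standard
    deviation indicates lower confidence in the boundary location)."""
    mean = np.sum(weights * means)
    second_moment = np.sum(weights * (stds ** 2 + means ** 2))
    return float(np.sqrt(second_moment - mean ** 2))

# Hypothetical boundary points: (weights, means in mm, stds in mm).
certain = mixture_conditional_std(np.array([1.0]),
                                  np.array([3.0]),
                                  np.array([0.1]))
ambiguous = mixture_conditional_std(np.array([0.4, 0.6]),
                                    np.array([2.0, 3.0]),
                                    np.array([0.2, 0.2]))
print(certain < ambiguous)  # True: the two-mode point is less certain
```

Thresholding such per-point scores is one way to select the low-confidence boundary regions to highlight for review.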
In one scenario, regions of a segmentation boundary that have confidence scores below a predetermined threshold score may be prioritized in a sequence of displays prompting user validation of the segmentation. Prioritization of the display of the regions may include displaying the low-confidence regions for validation, prior to displaying regions with higher confidence scores. Higher confidence scores may be defined as confidence scores exceeding the predetermined threshold score. For one case, a model validation workflow may display regions of a segmentation boundary based on the lowest to the highest confidence scores. For example, the workflow may prompt a technician to examine or validate regions of a segmentation boundary that have the lowest confidence scores, prior to reviewing regions of the boundary with higher confidence scores. One exemplary workflow may alert the technician only to regions of the segmentation boundary that have confidence scores below a predetermined threshold score. Alternately or in addition, the display may highlight portions with high confidence scores and offer only these portions as segments of the model that may be used for diagnostic evaluations.
In one embodiment, the computed confidence score or samples from the PDF may also be used as a probabilistic input for determining physiological/biomechanical properties at one or more locations in a vascular or organ/muscle model (e.g., a blood flow characteristic including flow, velocity, pressure, fractional flow reserve (FFR), instantaneous fractional flow reserve (iFFR), axial stress, wall shear stress, strain, force, shape, size, volume, tortuosity, etc.). These physiological or biomechanical properties may be determined via biophysical simulation, machine learning, association with a database, etc. For example, the uncertainty of lumen boundary points may be used to model variations of the lumen geometry. The modeled variations of lumen geometry may be utilized by a machine-learning driven FFR method. For instance, if a vessel size is uncertain at a particular location, the PDF may provide capabilities to compute an uncertainty of FFR (axial stress, iFFR, etc.) computed based on the uncertain vessel size. In this way, the PDF may provide an understanding of confidence intervals or probability density functions of physiological/biomechanical properties computed based on a segmentation boundary (with a predicted PDF). The probabilistic segmentation model and the biophysiological model could be trained in sequence or simultaneously in one end-to-end approach.
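The propagation of boundary uncertainty into a downstream quantity may be sketched with a Monte Carlo loop; the Poiseuille-like 1/r^4 surrogate used here is an illustrative assumption, not the FFR computation referenced above:

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate_radius_uncertainty(radius_mean, radius_std, n=10000):
    """Monte Carlo sketch of propagating lumen-size uncertainty into a
    downstream hemodynamic quantity. The Poiseuille-like 1/r^4
    resistance is a hypothetical surrogate for the simulated or
    learned blood flow characteristics described in the text."""
    radii = rng.normal(radius_mean, radius_std, n)
    radii = radii[radii > 0.5]     # discard non-physical samples
    resistance = 1.0 / radii ** 4  # hypothetical surrogate quantity
    return resistance.mean(), resistance.std()

# An uncertain vessel radius (mm) yields an uncertain resistance.
m_lo, s_lo = propagate_radius_uncertainty(2.0, 0.05)
m_hi, s_hi = propagate_radius_uncertainty(2.0, 0.30)
print(s_lo < s_hi)  # True: wider boundary PDF, wider output spread
```

The spread of the output samples plays the role of the confidence interval on the physiological property that the text describes.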
The confidence score may further be used as an input for a segmentation algorithm for weighting the training data. For example, training may include sampling certain training samples (e.g., training segmentations with a high degree of accuracy for the structure of interest) at a different frequency or at different iterations than uncertain training examples (e.g., training segmentations with a low degree of accuracy for the structure of interest). The confidence score may help determine which sets of training data should have more impact on the training of the probability density function model (e.g., of method 200).
The confidence score may also be used to quantify the quality of a reconstruction algorithm. Quantifying the quality of a reconstruction algorithm may provide an indication of a preferred segmentation. For instance, a preferred reconstruction algorithm may include a reconstruction algorithm that produces an image which leads to a more certain segmentation over image(s) produced by other reconstruction algorithms. Similarly, for various models, the confidence score may be used to select among different parameterized models. A model (e.g., a complete segmentation boundary) with higher confidence in a segmentation may be preferred. In one embodiment, multiple different models may also be presented to a user for review and/or selection, possibly with the guidance of the confidence score information. Lastly, the PDF or confidence score may be used to build an integrated model by integrating multiple images, producing a resultant model/image, reconstruction, or enhancement that meets or exceeds a given target PDF or confidence score.
FIGS. 3A and 3B are directed to specific embodiments or applications of the exemplary methods discussed in FIGS. 2A and 2B. For example, FIG. 3A and FIG. 3B, respectively, describe an exemplary training phase and testing phase for performing probabilistic segmentation of blood vessels. The accuracy of patient-specific segmentation of blood vessels may impact several medical assessments, including blood flow simulations, calculations of geometric characteristics of blood vessels, plaque rupture predictions, perfusion predictions, etc. While a single segmentation output of a blood vessel may be provided by automated algorithms, several uncertainties may exist about the precise location of a segmentation boundary. Even two human experts may favor different annotations of a vessel boundary due to ambiguous image content. In one embodiment, method 300 may model the uncertainty (or the confidence) in a generated vessel segmentation boundary by modeling a probability density function (PDF) around the vessel segmentation boundary. Such a probabilistic segmentation model could be applied to model the uncertainty of plaque, tissue, organ or bone segmentation, where the segmentation model may capture multiple plausible segmentation realizations or summarize the variability of the model.
FIG. 3A is a flowchart of an exemplary method 300 for a training phase designed to provide the basis for performing probabilistic segmentation of a patient's blood vessels, according to various embodiments. In one embodiment, step 301 may include receiving one or more images of coronary arteries in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). These images may, for instance, be from a medical imaging device, such as CT, MR, SPECT, PET, ultrasound, (multi-view) angiography, etc. These images may be referred to as “training images.”
In one embodiment, step 303 may include receiving annotations of the vessel lumen boundary and the vessel lumen centerline(s) of each of the training images. For example, step 303 may include receiving or generating a geometric mesh of the coronary vessels represented in the received images. In one embodiment, the geometric mesh may be specified as a set of vertices and edges. Alternately or in addition, step 303 may include receiving a centerline of the coronary vessels. The centerline may also be represented as a set of vertices that may be connected by edges.
In one embodiment, step 305 may include transforming the training image data (e.g., the geometric mesh, vertices, edges, centerline, etc.) into a curvilinear planar representation (CPR). For example, a set of planes (e.g., frames) may be extracted along the centerline (e.g., orthogonal to the centerline) to constitute a 3D volume. In one embodiment, the 3D volume may comprise a curvilinear planar representation (CPR), with a coordinate system frame of reference defining two dimensions and the centerline length defining a third dimension. In one embodiment, the curvilinear planar representation may eliminate degrees of freedom (e.g., the curvature of the centerline), which may not be relevant for predicting one or more parameters of the coronary vessels. For example, the curvature of the centerline may be irrelevant for determining a parameter related to the location of the coronary vessels' lumen boundary.
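For illustration only, the plane extraction of step 305 might be sketched as follows. The function names and the simple reference-axis frame construction are hypothetical conveniences, not part of the disclosed method; for each centerline point, an orthonormal frame is built whose two in-plane axes span one 2D slice of the CPR volume.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cpr_frames(centerline):
    """For each centerline point, build an orthonormal frame
    (point, tangent, u, v). The axes u and v span the plane
    orthogonal to the centerline, i.e., one 2D frame of the CPR
    volume; the centerline length supplies the third dimension."""
    frames = []
    for i, p in enumerate(centerline):
        a = centerline[max(i - 1, 0)]
        b = centerline[min(i + 1, len(centerline) - 1)]
        t = normalize(tuple(bb - aa for aa, bb in zip(a, b)))
        # Pick a fixed reference axis not parallel to the tangent.
        ref = (0.0, 0.0, 1.0) if abs(t[2]) < 0.9 else (1.0, 0.0, 0.0)
        u = normalize(cross(ref, t))
        v = cross(t, u)
        frames.append((p, t, u, v))
    return frames
```

Storing these frames (origin and axes per centerline point) also provides the transformation parameters needed later to map predictions back into the original image space.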
In one embodiment, step 307 may include training a statistical model that may map the CPR or sub-regions of the CPR to one or more distances from the centerline to boundary points of the lumen. For instance, one such exemplary distance may include the distance from the center of each CPR frame to the lumen boundary in a set of angular directions around the centerline. To model the uncertainty around each distance output, step 307 may include assuming a mixture of Gaussians (MoG) model. In one embodiment, the trained statistical model may predict a set of standard deviations and mixing coefficients for each distance output. The standard deviations and mixing coefficients may approximate the PDF by a mixture of normal distributions. The objective function of the statistical model may be specified, in one embodiment, as the loss between each annotated distance from the centerline to the lumen and the conditional mean of the MoG model. In one embodiment, the objective function for the MoG model may be uniquely defined, and the conditional mean of the MoG model may comprise a weighted mean. For example, the MoG may provide one or more means, and the weight of each mean may be determined by the corresponding mixing coefficient of the mixture model.
FIG. 3B is a block diagram of an exemplary method 310 for a testing phase that may provide a probabilistic segmentation of a patient's blood vessels, according to one embodiment. In one embodiment, step 311 may include receiving image data of a patient's coronary artery in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.).
In one embodiment, step 313 may include transforming the received image data into a curvilinear planar representation (CPR). For instance, step 313 may include receiving a prediction of the centerline of the patient's coronary artery (e.g., from any centerline detection algorithm in the literature). A set of planes may be extracted along the centerline (e.g., orthogonal to the centerline) to constitute a 3D volume (e.g., CPR), with the coordinate system frame of reference defining two dimensions and the centerline length defining the third dimension. The transformation parameters (e.g., translation, scale, rotation) may be stored.
In one embodiment, step 315 may include using saved results of the training phase (of method 300) to predict distances from the center of each CPR frame to a lumen boundary in the angular directions around a centerline, specified during the training. Alternately or in addition, step 315 may include using saved results of the training phase to predict standard deviation(s) and mixing coefficient(s) for each predicted distance. As a further embodiment, step 315 may include computing a conditional mean and/or a conditional standard deviation of each estimated lumen boundary. In one embodiment, the conditional mean of an estimated lumen boundary may include the location of a landmark point on a lumen mesh. A conditional standard deviation may indicate the uncertainty of the location of the landmark point.
In one embodiment, step 317 may include generating an anatomic model of the patient's imaged coronary artery. The anatomic model may include a final lumen segmentation. For example, step 317 may include transforming the predicted landmark point(s) from the CPR representation back to the original 3D image space. The orientation and position of each frame along the centerline may be determined from the creation of a CPR (if stored during step 313). In one embodiment, the 3D points may be computed from the CPR, and any 3D surface reconstruction method (e.g., Poisson surface reconstruction) may be applied to this point cloud of landmark point(s) to construct the anatomic model or final lumen segmentation.
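A hypothetical sketch of the back-transform in step 317: a predicted boundary point, given as a distance along an angular direction within a CPR frame, is mapped to 3D world coordinates using that frame's stored origin and in-plane axes (the function name and argument layout are illustrative assumptions).

```python
import math

def cpr_point_to_world(origin, u_axis, v_axis, d, theta):
    """Map a boundary point predicted in a CPR frame -- a distance d
    from the frame center along angular direction theta -- back into
    the original 3D image space, using the frame origin and the two
    orthonormal in-plane axes stored when the CPR was created."""
    du = d * math.cos(theta)
    dv = d * math.sin(theta)
    return tuple(o + du * ua + dv * va
                 for o, ua, va in zip(origin, u_axis, v_axis))
```

Repeating this for every frame and angular direction yields the point cloud of landmark points to which a 3D surface reconstruction method (e.g., Poisson surface reconstruction) may then be applied.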
In one embodiment, step 319 may include outputting the anatomic model/complete segmentation boundary of the vessels and the associated uncertainty values to an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or display.
FIGS. 4A and 4B describe a probabilistic segmentation method which uses a learning system trained using local image descriptors. This type of probabilistic segmentation may take place without a regressor or image transformation. In one embodiment, the location of a boundary point may be uncertain, meaning image elements in the neighborhood of the boundary point may be viable alternative candidates. In one embodiment, a segmentation model may be trained to assign probabilities to the likelihood that an element of an image includes an object boundary point. The probabilities in the local neighborhood of the element may then be interpreted as the uncertainty or confidence of the lumen boundary point. One such segmentation model (e.g., statistical method) may include a model trained to classify object boundary points in an image (e.g., in a 3D volume). The classification of the object boundary points may be based on local image descriptors of the image. Examples of local image descriptors may include local image intensities, Gaussian derivative image features, Gabor filter responses, learned features from pre-trained artificial neural network(s), etc.
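As a crude, purely illustrative stand-in for the local image descriptors listed above (the disclosed embodiments may use richer features such as Gabor responses or learned features), a descriptor for a 2D image might concatenate raw patch intensities with central finite-difference derivatives:

```python
def local_descriptor(image, y, x, radius=1):
    """A minimal local image descriptor for a 2-D image (list of
    rows): raw intensities in a (2*radius+1)^2 patch around (y, x),
    plus central finite-difference derivatives as a simple proxy for
    Gaussian-derivative image features."""
    patch = [image[y + dy][x + dx]
             for dy in range(-radius, radius + 1)
             for dx in range(-radius, radius + 1)]
    gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return patch + [gx, gy]
```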
FIG. 4A provides an exemplary training phase for this type of segmentation model, and FIG. 4B describes an embodiment of testing this segmentation model. FIG. 4A is a flowchart of an exemplary method 400 for a training phase designed to provide a probabilistic segmentation model based on local image descriptors, according to various embodiments. In one embodiment, step 401 may include receiving one or more images of an object of interest in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). These images may, for instance, be from a medical imaging device, e.g., CT, MR, SPECT, PET, ultrasound, (multi-view) angiography, etc. The images may include images of a patient's anatomy. These images may be referred to as “training images.”
In one embodiment, step 403 may include receiving, for each image element (e.g., voxel), a label. The label may indicate the boundary of a structure of interest. Step 405 may include defining the boundary of the structure of interest based on the label (e.g., by image voxels).
In one embodiment, step 407 may include training a statistical model (e.g., a nearest neighbor classifier, a random forest classifier, etc.) to predict the probability that a voxel or a point at an image location (e.g., the center of an image sub-region) comprises a boundary point. The probability may be determined based on a local image descriptor associated with the voxel or the point. The local image descriptor may be provided by the label (e.g., of step 403). In other words, one or more of the training images may include detected image-derived features. Each of the image-derived features may have a corresponding label and point location indicator (e.g., a point may be located at the center of an image sub-region). The label may indicate whether the point constitutes a boundary point of the structure of interest (e.g., the point constitutes image background or the point constitutes a boundary of the structure of interest).
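Taking the nearest neighbor classifier mentioned above as one concrete choice, the predicted probability that a voxel is a boundary point can be the fraction of its k nearest training examples labeled as boundary. A minimal sketch under that assumption (the function name and data layout are hypothetical):

```python
import math

def knn_boundary_probability(descriptor, examples, k=3):
    """Estimate the probability that the image location described by
    `descriptor` is a boundary point, as the fraction of its k
    nearest training examples labeled 1 (boundary).  `examples` is a
    list of (descriptor, label) training pairs, label 0 = background,
    label 1 = boundary of the structure of interest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(examples, key=lambda e: dist(e[0], descriptor))[:k]
    return sum(label for _, label in nearest) / float(k)
```

Applied over a neighborhood of locations, these per-voxel probabilities form the probability map whose local spread may then be interpreted as the uncertainty of the boundary point.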
Each of these associations or pairings between image-derived features and labels may constitute “training examples” produced from the training images. The training phase may include determining and/or receiving the training examples. The training phase may include training the statistical model to predict a label for a new image, based on an image descriptor (e.g., an image-derived feature). The image descriptor may be associated with a location (e.g., a voxel) of the new image, and the label for the new image may include a likelihood (e.g., probability) that the location of the new image constitutes a boundary of the object of interest.
FIG. 4B is a block diagram of an exemplary method 410 for a testing phase that may provide a probabilistic segmentation based on local image descriptors, according to one embodiment. In one embodiment, step 411 may include receiving an image in an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.). The image may include a representation of a portion of a patient's anatomy.
In one embodiment, step 413 may include predicting the probability that a location (within a local image region of the received image) comprises a boundary point (e.g., of the patient's anatomy). For example, step 413 may include predicting the probability that the location is a boundary point of the object, using the trained statistical model (of method 400). In one embodiment, step 413 may include defining a region around a likely boundary point to be a region of uncertainty. Alternately or in addition, step 413 may include applying a shape-fitting model to determine the object boundary, e.g., with the goal of covering the entire space around a segmentation boundary. In one embodiment, the shape-fitting model may be fit to the boundary points, with the regions of uncertainty lying along the normal directions of the object boundary. In the case of a medial axis representation (e.g., a surface described by a curve, a set of perpendicular rays, and distances along the rays), uncertainties may lie in the direction of the respective rays. The spatial distribution of the probability map in the uncertainty region may be summarized as an uncertainty score. For instance, the standard deviation of the probability map may be computed within the region to determine an uncertainty score. Alternatively, another statistical classifier could be trained to map uncertainty regions to an uncertainty score.
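The standard-deviation summary of step 413 reduces to a one-liner; a minimal sketch (the function name and the dict-keyed probability-map layout are illustrative assumptions) might look like:

```python
import statistics

def uncertainty_score(prob_map, region):
    """Summarize the spatial distribution of boundary probabilities
    inside an uncertainty region as a single score: the population
    standard deviation of the probability map over that region.
    `prob_map` maps voxel indices to probabilities; `region` is the
    list of voxel indices forming the uncertainty region."""
    values = [prob_map[idx] for idx in region]
    return statistics.pstdev(values)
```

A uniform probability map over the region yields a score of zero (no spatial ambiguity), while a sharply varying map yields a larger score.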
In one embodiment, step 415 may include generating and outputting a complete segmentation boundary of the object and the associated uncertainty values to an electronic storage medium (e.g., hard drive, network drive, cloud drive, mobile phone, tablet, database, etc.) and/or display. The object may include at least a portion of the patient's anatomy.
Thus, a segmentation may render various segmentation boundary locations of varying degrees of accuracy. The present disclosure is directed to providing information on the statistical confidence (or statistical uncertainty) of a segmentation by predicting a probability distribution of segmentation boundary locations. The probability distribution may include a probability density function. One embodiment of predicting a probability density function may include two phases: a training phase and a testing phase. An exemplary training phase may include recognizing patterns of statistical uncertainty from a collection of parameterizations or segmentations, and storing the recognized patterns. An exemplary testing phase may include using the recognized patterns to predict a probability density function of an image segmentation of a new image. In one embodiment, the training phase may include training a statistical model or system to recognize the patterns of uncertainty in segmentations. The testing phase may include applying the trained statistical model to a new image, segmentation, or object parameterization, to predict a probability density function or confidence value for the segmentation/parameterization. The probability density function or confidence value may be used in improving the accuracy of segmentation(s), models made from segmentation(s), or analyses performed using such models.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (18)

What is claimed is:
1. A computer-implemented method of performing probabilistic segmentation in anatomical image analysis, the method comprising:
receiving a plurality of images of an anatomical structure;
receiving one or more geometric labels of the anatomical structure;
generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images;
determining a distance from a point on a centerline of the anatomical structure to a point on a lumen boundary of the anatomical structure;
mapping a region of the parameterized representation to the determined distance;
receiving an image of a patient's anatomy; and
generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation to the determined distance.
2. The method of claim 1, further comprising:
generating a statistical confidence value based on the probability distribution.
3. The method of claim 1, further comprising:
generating the patient-specific segmentation boundary of the patient's anatomy, based on the probability distribution.
4. The method of claim 1, wherein the anatomical structure comprises a blood vessel and the patient's anatomy comprises a blood vessel of the patient's vasculature.
5. The method of claim 1, wherein the geometric labels include annotations of a vessel lumen boundary, centerline, surface, or a combination thereof.
6. The method of claim 1, further comprising:
generating a statistical confidence score of the determined distance.
7. The method of claim 1, further comprising:
generating the patient-specific segmentation boundary based on the determined distance.
8. The method of claim 1, further comprising:
generating, for each image of the plurality of images, a parameterized representation of the anatomical structure, wherein the parameterized representation includes a three-dimensional volumetric model.
9. A system for performing probabilistic segmentation in anatomical image analysis, the system comprising:
a server storing instructions for performing probabilistic segmentation in anatomical image analysis; and
a processor configured to execute the instructions to perform a method including:
receiving a plurality of images of an anatomical structure;
receiving one or more geometric labels of the anatomical structure;
generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images;
determining a distance from a point on a centerline of the anatomical structure to a point on a lumen boundary of the anatomical structure;
mapping a region of the parameterized representation to the determined distance;
receiving an image of a patient's anatomy; and
generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation to the determined distance.
10. The system of claim 9, wherein the system is further configured for:
generating a statistical confidence value based on the probability distribution.
11. The system of claim 9, wherein the system is further configured for:
generating the patient-specific segmentation boundary of the patient's anatomy, based on the probability distribution.
12. The system of claim 9, wherein the anatomical structure comprises a blood vessel and the patient's anatomy comprises a blood vessel of the patient's vasculature.
13. The system of claim 9, wherein the geometric labels include annotations of a vessel lumen boundary, centerline, surface, or a combination thereof.
14. The system of claim 9, wherein the system is further configured for:
generating a statistical confidence score of the determined distance.
15. The system of claim 9, wherein the system is further configured for:
generating the patient-specific segmentation boundary based on the determined distance.
16. A non-transitory computer readable medium for use on a computer system containing computer-executable programming instructions for a method of performing probabilistic segmentation in anatomical image analysis, the method comprising:
receiving a plurality of images of an anatomical structure;
receiving one or more geometric labels of the anatomical structure;
generating a parametrized representation of the anatomical structure based on the one or more geometric labels and the received plurality of images;
determining a distance from a point on a centerline of the anatomical structure to a point on a lumen boundary of the anatomical structure;
mapping a region of the parameterized representation to the determined distance;
receiving an image of a patient's anatomy; and
generating a probability distribution for a patient-specific segmentation boundary of the patient's anatomy, based on the mapping of the region of the parameterized representation to the determined distance.
17. The non-transitory computer readable medium of claim 16, the method further comprising:
generating a statistical confidence value based on the probability distribution.
18. The non-transitory computer readable medium of claim 16, the method further comprising:
generating the patient-specific segmentation boundary of the patient's anatomy, based on the probability distribution.
US15/852,119 2016-12-23 2017-12-22 Systems and methods for probabilistic segmentation in anatomical image processing Active 2038-05-10 US10600181B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/852,119 US10600181B2 (en) 2016-12-23 2017-12-22 Systems and methods for probabilistic segmentation in anatomical image processing
US16/790,037 US11443428B2 (en) 2016-12-23 2020-02-13 Systems and methods for probablistic segmentation in anatomical image processing
US17/817,737 US20220383495A1 (en) 2016-12-23 2022-08-05 Systems and methods for probablistic segmentation in anatomical image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662438514P 2016-12-23 2016-12-23
US15/852,119 US10600181B2 (en) 2016-12-23 2017-12-22 Systems and methods for probabilistic segmentation in anatomical image processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/790,037 Continuation US11443428B2 (en) 2016-12-23 2020-02-13 Systems and methods for probablistic segmentation in anatomical image processing

Publications (2)

Publication Number Publication Date
US20180182101A1 US20180182101A1 (en) 2018-06-28
US10600181B2 true US10600181B2 (en) 2020-03-24

Family

ID=60972522

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/852,119 Active 2038-05-10 US10600181B2 (en) 2016-12-23 2017-12-22 Systems and methods for probabilistic segmentation in anatomical image processing
US16/790,037 Active US11443428B2 (en) 2016-12-23 2020-02-13 Systems and methods for probablistic segmentation in anatomical image processing
US17/817,737 Pending US20220383495A1 (en) 2016-12-23 2022-08-05 Systems and methods for probablistic segmentation in anatomical image processing

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/790,037 Active US11443428B2 (en) 2016-12-23 2020-02-13 Systems and methods for probablistic segmentation in anatomical image processing
US17/817,737 Pending US20220383495A1 (en) 2016-12-23 2022-08-05 Systems and methods for probablistic segmentation in anatomical image processing

Country Status (4)

Country Link
US (3) US10600181B2 (en)
EP (2) EP4145391A1 (en)
JP (1) JP7134962B2 (en)
WO (1) WO2018119358A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11443428B2 (en) * 2016-12-23 2022-09-13 Heartflow, Inc. Systems and methods for probablistic segmentation in anatomical image processing
WO2023097312A1 (en) 2021-11-29 2023-06-01 Heartflow, Inc. Systems and methods for processing electronic images using user inputs
WO2023097314A1 (en) 2021-11-29 2023-06-01 Heartflow, Inc. Systems and methods for processing electronic images for physiology-compensated reconstruction

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10192129B2 (en) 2015-11-18 2019-01-29 Adobe Systems Incorporated Utilizing interactive deep learning to select objects in digital visual media
US11568627B2 (en) 2015-11-18 2023-01-31 Adobe Inc. Utilizing interactive deep learning to select objects in digital visual media
WO2017199245A1 (en) 2016-05-16 2017-11-23 Cathworks Ltd. System for vascular assessment
IL263066B2 (en) 2016-05-16 2023-09-01 Cathworks Ltd Vascular selection from images
US10762637B2 (en) * 2017-10-27 2020-09-01 Siemens Healthcare Gmbh Vascular segmentation using fully convolutional and recurrent neural networks
EP3511866A1 (en) * 2018-01-16 2019-07-17 Koninklijke Philips N.V. Tissue classification using image intensities and anatomical positions
US11244195B2 (en) * 2018-05-01 2022-02-08 Adobe Inc. Iteratively applying neural networks to automatically identify pixels of salient objects portrayed in digital images
EP3644275A1 (en) 2018-10-22 2020-04-29 Koninklijke Philips N.V. Predicting correctness of algorithmic segmentation
US20200202622A1 (en) * 2018-12-19 2020-06-25 Nvidia Corporation Mesh reconstruction using data-driven priors
US11282208B2 (en) 2018-12-24 2022-03-22 Adobe Inc. Identifying target objects using scale-diverse segmentation neural networks
US10813612B2 (en) 2019-01-25 2020-10-27 Cleerly, Inc. Systems and method of characterizing high risk plaques
CA3129213A1 (en) * 2019-02-06 2020-08-13 The University Of British Columbia Neural network image analysis
JP7296773B2 (en) * 2019-04-26 2023-06-23 キヤノンメディカルシステムズ株式会社 MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING PROGRAM
CA3137030A1 (en) 2019-05-31 2020-12-03 Maryam ZIAEEFARD Method and processing device for training a neural network
JP2021051573A (en) * 2019-09-25 2021-04-01 キヤノン株式会社 Image processing apparatus, and method of controlling image processing apparatus
US10695147B1 (en) * 2019-12-04 2020-06-30 Oxilio Ltd Method and system for dental boundary determination
US11562203B2 (en) 2019-12-30 2023-01-24 Servicenow Canada Inc. Method of and server for training a machine learning algorithm for estimating uncertainty of a sequence of models
AU2021205821A1 (en) 2020-01-07 2022-07-21 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
US20220392065A1 (en) 2020-01-07 2022-12-08 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
US20210319558A1 (en) 2020-01-07 2021-10-14 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
EP3863022A1 (en) * 2020-02-06 2021-08-11 Siemens Healthcare GmbH Method and system for automatically characterizing liver tissue of a patient, computer program and electronically readable storage medium
US11335004B2 (en) 2020-08-07 2022-05-17 Adobe Inc. Generating refined segmentation masks based on uncertain pixels
CN112287946B (en) * 2020-09-23 2023-04-18 南方医科大学珠江医院 Automatic knee joint image omics feature extraction method based on MATLAB
JP2022090798A (en) * 2020-12-08 2022-06-20 キヤノンメディカルシステムズ株式会社 Analysis device, analysis system and analysis method
US11676279B2 (en) 2020-12-18 2023-06-13 Adobe Inc. Utilizing a segmentation neural network to process initial object segmentations and object user indicators within a digital image to generate improved object segmentations
EP4270305A1 (en) * 2020-12-24 2023-11-01 FUJIFILM Corporation Learning device, method, and program, and medical image processing device
US11875510B2 (en) 2021-03-12 2024-01-16 Adobe Inc. Generating refined segmentations masks via meticulous object segmentation
US20230289963A1 (en) 2022-03-10 2023-09-14 Cleerly, Inc. Systems, devices, and methods for non-invasive image-based plaque analysis and risk determination

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014457A1 (en) 2005-07-13 2007-01-18 Marie-Pierre Jolly Method for knowledge based image segmentation using shape models
US20160267673A1 (en) 2015-03-10 2016-09-15 Siemens Aktiengesellschaft Systems and Method for Computation and Visualization of Segmentation Uncertainty in Medical Images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3534009B2 (en) * 1999-09-24 2004-06-07 日本電気株式会社 Outline extraction method and apparatus
DE102010022307A1 (en) * 2010-06-01 2011-12-01 Siemens Aktiengesellschaft Method for checking segmentation of structure e.g. left kidney in medical image data, involves performing spatially resolved automatic determination of confidence values for positions on segmentation contour using property values
US8160357B2 (en) 2010-07-30 2012-04-17 Kabushiki Kaisha Toshiba Image segmentation
US9430827B2 (en) * 2013-05-31 2016-08-30 Siemens Aktiengesellschaft Segmentation of a calcified blood vessel
US9747525B2 (en) * 2014-06-16 2017-08-29 Siemens Healthcare Gmbh Method and system for improved hemodynamic computation in coronary arteries
EP4145391A1 (en) * 2016-12-23 2023-03-08 HeartFlow, Inc. Systems and methods for probabilistic segmentation in anatomical image processing


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mao et al. "Segmentation of Carotid Artery in Ultrasound Images." Proceedings of the 22nd Annual EMBS International Conference, Jul. 23, 2000, pp. 1734-1737 (Year: 2000). *
Phellan Renzo et al. "Medical image segmentation using object atlas versus object cloud models", Progress in Biomedical Optics and Imaging, SPIE-International Society For Optical Engineering, Belligham, WA, US, vol. 9415, Mar. 18, 2015, pp. 94151M-94151M.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11443428B2 (en) * 2016-12-23 2022-09-13 Heartflow, Inc. Systems and methods for probablistic segmentation in anatomical image processing
US20220383495A1 (en) * 2016-12-23 2022-12-01 Heartflow, Inc. Systems and methods for probablistic segmentation in anatomical image processing
WO2023097312A1 (en) 2021-11-29 2023-06-01 Heartflow, Inc. Systems and methods for processing electronic images using user inputs
WO2023097314A1 (en) 2021-11-29 2023-06-01 Heartflow, Inc. Systems and methods for processing electronic images for physiology-compensated reconstruction

Also Published As

Publication number Publication date
US20220383495A1 (en) 2022-12-01
EP4145391A1 (en) 2023-03-08
EP3559907A1 (en) 2019-10-30
JP2020503603A (en) 2020-01-30
US11443428B2 (en) 2022-09-13
US20180182101A1 (en) 2018-06-28
WO2018119358A1 (en) 2018-06-28
US20200184646A1 (en) 2020-06-11
JP7134962B2 (en) 2022-09-12

Similar Documents

Publication Publication Date Title
US11443428B2 (en) Systems and methods for probablistic segmentation in anatomical image processing
US20230196582A1 (en) Systems and methods for anatomic structure segmentation in image analysis
US11288808B2 (en) System and method for n-dimensional image segmentation using convolutional neural networks
US10366491B2 (en) Deep image-to-image recurrent network with shape basis for automatic vertebra labeling in large-scale 3D CT volumes
US20200242405A1 (en) Intelligent multi-scale medical image landmark detection
US11010630B2 (en) Systems and methods for detecting landmark pairs in images
CN109102490B (en) Automatic image registration quality assessment
JP6885517B1 (en) Diagnostic support device and model generation device
US9087370B2 (en) Flow diverter detection in medical imaging
EP3807845B1 (en) Atlas-based location determination of an anatomical region of interest
EP3722996A2 (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
CN113287149A (en) Medical image analysis using machine learning and anatomical vectors
CN112862786B (en) CTA image data processing method, device and storage medium
CN112862785A (en) CTA image data identification method, device and storage medium
CN114708973B (en) Device and storage medium for evaluating human health
US11501442B2 (en) Comparison of a region of interest along a time series of images
US20240062857A1 (en) Systems and methods for visualization of medical records
CN114727800A (en) Image processing method and apparatus for object detection or recognition
Daryanani Left ventricle myocardium segmentation from 3d cardiac mr images using combined probabilistic atlas and graph cut-based approaches
Caban Generation and visualization of relational statistical deformation models for morphological image analysis

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: HEARTFLOW, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PETERSEN, PETER KERSTEN;SCHAAP, MICHIEL;GRADY, LEO;SIGNING DATES FROM 20180123 TO 20180125;REEL/FRAME:048483/0195

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HAYFIN SERVICES LLP, UNITED KINGDOM

Free format text: SECURITY INTEREST;ASSIGNOR:HEARTFLOW, INC.;REEL/FRAME:055037/0890

Effective date: 20210119

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4