WO2019042962A1 - Localization of anatomical structures in medical images

Info

Publication number
WO2019042962A1
Authority
WIPO (PCT)
Prior art keywords
matches, ranked, processor, scored, localization
Application number
PCT/EP2018/073077
Other languages
French (fr)
Inventor
Thomas Blaffert
Tom BROSCH
Hannes NICKISCH
Jochen Peters
Alexander SCHMIDT-RICHBERG
Rolf Jürgen WEESE
Original Assignee
Koninklijke Philips N.V.
Application filed by Koninklijke Philips N.V.
Publication of WO2019042962A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N - ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 5/00 - Radiation therapy
    • A61N 5/10 - X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N 5/1048 - Monitoring, verifying, controlling systems and methods
    • A61N 5/1049 - Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

A system (100) includes a computing system (118) with a processor (120) and a computer readable storage medium (122) with computer readable instructions (124) including a localizer (126) with a re-ranker (204). The processor is configured to execute the re-ranker instructions, which causes the re-ranker to receive a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate match includes a set of points of a shape model, and rank the ranked or scored candidate matches based on a predetermined set of features. The shape model represents an anatomical structure of interest.

Description

LOCALIZATION OF ANATOMICAL STRUCTURES IN MEDICAL IMAGES
FIELD OF THE INVENTION
The following generally relates to imaging and more specifically to a localization of anatomical structures in medical images, and finds particular application to computed tomography (CT), but is also amenable to other imaging modalities such as positron emission tomography (PET), single photon emission tomography (SPECT), magnetic resonance imaging (MRI), digital X-ray, and/or other imaging modalities.
BACKGROUND OF THE INVENTION
The localization of an anatomical structure(s) is a step in automated image segmentation. With one approach, this step has included matching a set of predetermined points in an anatomical shape model or template of a structure of interest with an image. Generally, a template matching algorithm yields a weighted number corresponding to the number of points in the template that match the image for a set of locations within the image. The location in the image with the highest match score is then used to localize the anatomical structure in the image for the segmentation. Unfortunately, the location with the highest match score may not correspond to the anatomical structure of interest, leading to incorrect anatomical structure localization and thereafter an erroneous segmentation and inefficient use of computation time and resources.
SUMMARY OF THE INVENTION
Aspects described herein address the above-referenced problems and/or others.
In one aspect, a system includes a computing system with a processor and a computer readable storage medium with computer readable instructions including a localizer with a re-ranker. The processor is configured to execute the re-ranker instructions, which causes the re-ranker to receive a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate match includes a set of points of a shape model, and rank the ranked or scored candidate matches based on a predetermined set of features. The shape model represents an anatomical structure of interest.
In another aspect, a method includes receiving a plurality of ranked or scored candidate matches that include a set of points of a shape model. The shape model represents an anatomical structure of interest. The method further includes applying a predetermined set of features to the plurality of ranked or scored candidate matches thereby generating ranked matches, and applying a localization classifier to determine a validity of a match in the set of ranked matches to generate ranked valid matches.
In another aspect, a computer readable medium is encoded with computer executable instructions which when executed by a processor cause the processor to receive a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate includes a set of points of a shape model that represents an anatomical structure of interest. The instructions, when executed by the processor, further cause the processor to apply a predetermined set of features to the scored matches to rank the plurality of ranked or scored candidate matches thereby generating ranked matches; and apply a localization classifier to determine a validity of a match in the set of ranked matches to generate ranked valid matches.
Those skilled in the art will recognize still other aspects of the present application upon reading and understanding the attached description.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may take form in various components and arrangements of components and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIGURE 1 schematically illustrates an imaging system with a structure of interest localizer and localization classifier.
FIGURE 2 schematically illustrates an example of the localizer.
FIGURE 3 illustrates an example of a voting model point offset of a full heart localization.
FIGURE 4 illustrates a histogram showing the relative octant occupancy of FIGURE 3.
FIGURE 5 schematically illustrates an example of the localizer in connection with the localization classifier. FIGURE 6 schematically illustrates another example of the localizer in connection with the localization classifier.
FIGURE 7 illustrates an example method in accordance with an embodiment herein.
DETAILED DESCRIPTION OF EMBODIMENTS
FIGURE 1 schematically illustrates a system 100 including an imaging system 102 such as a CT scanner. The imaging system 102 includes a generally stationary gantry 104 and a rotating gantry 106, which is rotatably supported by the stationary gantry 104 and rotates around an examination region 108 about a z-axis. A subject support 110, such as a couch, supports an object or subject in the examination region 108. A radiation source 112, such as an x-ray tube, is rotatably supported by the rotating gantry 106, rotates with the rotating gantry 106, and emits radiation that traverses the examination region 108. A radiation sensitive detector array 114 detects radiation traversing the examination region 108 and generates an electrical signal(s) (projection data) indicative thereof. A reconstructor 116 receives the projection data from the detector array 114 and reconstructs it into three-dimensional volumetric image data.
In the illustrated embodiment, a computing system serves as an operator console 118. The console 118 includes a human readable output device such as a monitor and an input device such as a keyboard, mouse, etc. Software resident on the console 118 allows the operator to interact with and/or operate the imaging system 102 via a graphical user interface (GUI) or otherwise. The console 118 further includes a processor 120 (e.g., a microprocessor, a controller, a central processing unit, etc.) and a computer readable storage medium 122, which excludes transitory medium and includes non-transitory medium such as a physical memory device, etc. The computer readable storage medium 122 includes instructions 124, and the processor 120 executes these instructions. Additionally or alternatively, the processor 120 executes one or more computer readable instructions 124 carried by a carrier wave, a signal and/or other transitory medium. In a variation, the processor 120 and the computer readable storage medium 122 are part of another computing system which is separate and distinct from the console 118.
In the illustrated embodiment, the instructions 124 include a localizer 126, a localization classifier 128 and a segmentor 130. The processor 120, in response to executing the localizer 126, automatically detects an anatomical structure of interest in the volumetric image data using a shape model representing the anatomical structure of interest (e.g., a kidney, the heart, etc.) and generates a plurality of candidate matches (localizations), where each candidate includes a set of points of the shape model that matched pixels and/or voxels in the image data. As described in greater detail below, in one example, the localizer 126 re-orders / re-ranks the candidates, e.g., based on additional features, candidate validation, a quality metric, and/or other information. The localization classifier 128 detects whether a particular localization is correct or valid (e.g., sufficiently close to the considered anatomical landmark) or invalid. As discussed in further detail below, the localization classifier 128 uses parameters obtained by a training procedure to determine localization validity. In a variation, the localization classifier 128 is omitted. The segmentor 130 segments the anatomical structure of interest in the volumetric image data using one of the ranked localizations such as a highest ranked localization of the re-ranked
localizations.
FIGURE 2 illustrates an example of the localizer 126 with the segmentor 130. In this example, the localizer includes a ranker 202 and a re-ranker 204. The ranker 202 employs an algorithm that generates a plurality of match scores, each score based on matching points in a template with the volumetric image data.
An example of such an algorithm is a Generalized Hough Transform (GHT). In a GHT, the template used to match points is a shape model M. The shape model M provides a description of features (e.g., gradient direction, local gray value constraints, etc.) within a given radius of a reference point. This description is a shape outline that is represented by a discretized set of edge points with a known geometric offset d from the reference point and a known normalized edge gradient direction n. The combination of offset d_i and edge orientation n_i is a shape model point P_i and is represented by Equation 1:
Equation 1:
P_i = {d_i, n_i},
where i indexes all model points in M. These model points can be matched with existing edge points in the volumetric image data. Placing M at a certain test location x, the model points are placed at their encoded offsets in the image, and a match for model point P_i is claimed if the volumetric image data has an edge point within a predetermined radius of x + d_i with an edge orientation close to n_i. Per match, the corresponding model point votes for the test location x, optionally with some weight w_i differing from 1. The votes are accumulated as a match score H(x) for the test location x. The match score is represented by Equation 2:
Equation 2:
H(x) = Σ_i w_i · h(x + d_i, n_i),
wherein w_i is the model point specific weight and h(x + d_i, n_i) is represented by Equation 3:
Equation 3:
h(x + d_i, n_i) = 1, if the image has an edge point close to x + d_i with an edge orientation close to n_i; 0, otherwise,
i.e., a vote is cast when the volumetric image data has an edge point within a predetermined radius of x + d_i with an edge orientation close to n_i. Suitable edge points, represented by Equation 4:
Equation 4:
E_k = {e_k, m_k},
of the image are obtained from an edge detector (e.g., Canny), wherein e_k is a discretized location and m_k is a discretized orientation. Additional processing with thresholds and filters suppresses noise edges.
After calculating H(x) (the Hough vote or match score) for a set of discrete test locations x, the best location x* is defined by the maximum Hough vote or match score. Placing the model M at x* results in the maximum (weighted) number of model points P_i that match detected edges in the image.
An algorithm selects the shape model points that match the orientation by means of an R-table, which is defined as the mapping m_k ↔ {P_i}_{m_k}. For each P_i, the discretized location candidate x is calculated by Equation 5:
Equation 5:
x = e_k - d_i.
These locations x are the centers of Hough cells (voxels) over a Hough space (a voxel grid). H(x) is incremented by w_i, wherein the increments are made in a Hough accumulator (an accumulator array that covers the Hough space). The set of shape model points and the R-table are the components of the shape model.
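For illustration only, the following Python/NumPy sketch accumulates Hough votes roughly as described above; the function name ght_vote, the dict-based R-table, the voxel_size parameter and the single-pass bookkeeping of voting indices are assumptions made for the example, not the patented implementation.

```python
import numpy as np

def ght_vote(edge_points, edge_orients, r_table, weights, grid_shape, voxel_size=1.0):
    """Accumulate Generalized Hough Transform votes for a shape model.

    edge_points:  (K, 3) array of discretized edge locations e_k.
    edge_orients: (K,) array of discretized orientation bin indices m_k.
    r_table:      dict mapping an orientation bin m_k to a list of
                  (offset d_i, model point index i) pairs.
    weights:      (N,) array of per-model-point weights w_i.
    grid_shape:   shape of the Hough accumulator (voxel grid).
    """
    accumulator = np.zeros(grid_shape, dtype=np.float32)
    voters = {}  # Hough cell -> indices of model points that voted there
    for e_k, m_k in zip(edge_points, edge_orients):
        for d_i, i in r_table.get(int(m_k), ()):
            # Candidate reference location x = e_k - d_i (Equation 5),
            # discretized to the center of a Hough cell.
            x = tuple(np.round((e_k - d_i) / voxel_size).astype(int))
            if all(0 <= c < s for c, s in zip(x, grid_shape)):
                accumulator[x] += weights[i]  # H(x) += w_i (Equation 2)
                # The text describes a second ranking pass that records voting
                # model points; here they are stored in a single pass for brevity.
                voters.setdefault(x, []).append(i)
    return accumulator, voters
```

The best location x* then corresponds to the accumulator cell with the maximum match score, e.g., np.unravel_index(np.argmax(accumulator), accumulator.shape).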
The re-ranker 204 receives scored matches 206 from the ranker 202 and applies
additional features, discussed in further detail below, to the scored matches 206 to evaluate collective properties of matching points (e.g., average model point location). The set to be processed may be interactively selected by a user or may be automatically selected (e.g., by exceeding a threshold of match scores) by the re-ranker 204.
The re-ranker 204 identifies the shape model points that voted in a specific accumulator cell. The automated localizer runs the ranking procedure a second time, where indices to the voting shape model points are stored whenever the respective accumulator cell is hit. This process is repeated for all locations within a given parameter (e.g., number of votes). The re-ranker 204 applies features such as a scalar feature 208, a histogram feature 210, a model point weights feature 212 and/or other features (collectively referred to as additional features 214) to the identified m voting model points of a solution and the identified n shape model points, utilizing in particular the offset vector d_i from the localization center and the edge gradient direction n_i of shape model point i.
The scalar feature 208 identifies invalid solutions by a deviation from the average model point distribution, both spatially and in gradient direction. The scalar feature 208 identifies invalid solutions by applying an algorithm such as a confidence algorithm 216, an offset distance algorithm 218, a gradient distance algorithm 220 to the identified m voting model points of a solution and the identified n shape model points, and/or other algorithm (collectively referred to as algorithms 222). The scalar feature 208 may employ a confidence algorithm 216, an offset distance algorithm 218, a gradient distance algorithm 220, and/or other algorithm, individually or in any combination to determine the validity of a solution.
The confidence algorithm 216 determines the relative number of votes (as a percentage) for a test location and is represented by Equation 6:
Equation 6:
f_c = (m / n) · 100,
where m is the number of voting model points of a test location and n is the number of shape model points. The confidence algorithm 216 determines validity of a solution by determining if the solution exceeds a threshold percentage of voting model points m of the shape model points n of a test location (e.g., 50%, 60%, 70%). The confidence algorithm 216 ranks the valid solutions based on the highest confidence.
For the offset distance algorithm 218 and the gradient distance algorithm 220, the voting model points j form a subset of the shape model points i and are stored as an index array i(j), j = 1, ..., m. The offset distance algorithm 218 determines the absolute value of the difference between the average voting point offset and the average model point offset and is represented by Equation 7:
Equation 7:
f_d = || o - r ||,
wherein o is the average voting point offset represented by Equation 8:
Equation 8:
o = (1/m) · Σ_{j=1..m} d_{i(j)},
and wherein r is the average model point offset represented by Equation 9:
Equation 9:
r = (1/n) · Σ_{i=1..n} d_i.
The offset distance algorithm 218 determines validity of a solution by determining if the solution is below a threshold offset (e.g., 1, 2, 3). The offset distance algorithm 218 ranks the valid solutions based on the lowest offset difference.
The gradient distance algorithm 220 determines the absolute value of the difference between the average voting gradient and the average model gradient and is represented by Equation 10:
Equation 10:
f_g = || ω - ρ ||,
wherein ω is the average voting gradient represented by Equation 11:
Equation 11:
ω = (1/m) · Σ_{j=1..m} n_{i(j)},
and wherein ρ is the average model gradient represented by Equation 12:
Equation 12:
ρ = (1/n) · Σ_{i=1..n} n_i.
The gradient distance algorithm 220 determines validity of a solution by determining if the solution is below a threshold (e.g., 1, 2, 3). The gradient distance algorithm 220 ranks the valid solutions based on the lowest gradient difference.
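Purely as an illustration, the scalar features of Equations 6 through 12 might be computed from the stored voting indices as in the following sketch; the function name, the array layout, and the example thresholds are assumptions, not values taken from the patent.

```python
import numpy as np

def scalar_features(offsets, gradients, voter_idx):
    """Scalar re-ranking features for one candidate localization.

    offsets:   (n, 3) array of shape model point offsets d_i.
    gradients: (n, 3) array of shape model point gradient directions n_i.
    voter_idx: indices i(j) of the m model points that voted for the candidate.
    Returns (confidence f_c, offset distance f_d, gradient distance f_g).
    """
    n, m = len(offsets), len(voter_idx)
    f_c = 100.0 * m / n                                    # Equation 6
    o, r = offsets[voter_idx].mean(axis=0), offsets.mean(axis=0)
    f_d = np.linalg.norm(o - r)                            # Equations 7-9
    w, p = gradients[voter_idx].mean(axis=0), gradients.mean(axis=0)
    f_g = np.linalg.norm(w - p)                            # Equations 10-12
    return f_c, f_d, f_g

# A candidate might, for example, be kept when f_c exceeds a threshold such as
# 60% while f_d and f_g stay below small distance thresholds (hypothetical values).
```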
The histogram feature 210 bins the occurrence of each voting model point into one of 8 bins of a linear 8-bin histogram. With the average model point offset r, as defined in Equation 9, as a center point, the coordinate space within a given radius is divided into 8 spatial 3D octants, each corresponding to one of the 8 bins of the linear histogram, wherein the signs of the coordinate difference vector d_i - r determine each octant boundary. Writing the difference vector with three coordinates as d_i - r ≡ (x_i, y_i, z_i) and representing the sign function as Equation 13:
Equation 13:
s(x) = 0, if x ≥ 0; 1, otherwise,
the octants are the indices represented in Equation 14:
Equation 14:
k_i ≡ k(d_i - r) = s(x_i) + 2·s(y_i) + 4·s(z_i), k_i = 0, ..., 7.
With the indices represented by Equation 14, each octant is associated with a bin in the linear 8-bin histogram, which, as previously stated, is filled from the occurrences of each voting model point in one of the octants. The histogram represents the spatial distribution of shape model points. FIGURE 3 depicts an example of a voting model point offset of a full heart localization distributed over the 8 octants with a first axis 302, a second axis 304, and a third axis 306 of a Cartesian coordinate system, where each graphical element 308 represents an occurrence of a voting model point within a given octant. FIGURE 4 depicts a histogram showing the relative octant occupancy of FIGURE 3 in percent, wherein a first axis 402 is the bin number corresponding to a given octant and a second axis 404 is the percentage of relative occupancy within a given bin. Returning to FIGURE 2, the normalized offset distribution histogram h_o (Equation 15) of all m voting model points is compared to a reference offset distribution that is calculated from all n shape model points and is stored as a normalized histogram h_r (Equation 16):
Equation 15:
h_o(l) = (1/m) · Σ_{j=1..m} [k_{i(j)} = l], l = 0, ..., 7,
and
Equation 16:
h_r(l) = (1/n) · Σ_{i=1..n} [k_i = l], l = 0, ..., 7,
where [·] equals 1 if the bracketed condition holds and 0 otherwise. Similarly, histograms h_ω and h_ρ may be calculated accordingly from the voting gradient vectors n_{i(j)} and the shape model gradient vectors n_i. The histogram feature 210 also compares the tested histograms to the reference histograms. The offset octants filling difference is represented as Equation 17:
Equation 17:
f_ho = Σ_{l=0..7} | h_o(l) - h_r(l) |,
and the gradient octants filling difference is represented as Equation 18:
Equation 18:
f_hg = Σ_{l=0..7} | h_ω(l) - h_ρ(l) |.
The histogram feature 210 determines validity of a solution by comparing a binned octant occupancy count to a threshold for a corresponding bin (e.g., 10% ± 3%, 20% ± 5%, 40% ± 2%). The histogram feature 210 ranks the valid solutions based on the compared bin counts.
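A minimal Python sketch of the octant binning and histogram comparison (Equations 13 through 17) could look as follows; the convention that non-negative coordinates map to 0 and negative coordinates to 1, and the sum-of-absolute-differences comparison, are assumptions of this example.

```python
import numpy as np

def octant_histograms(offsets, voter_idx):
    """Normalized 8-bin octant histograms h_o (voting points) and h_r
    (all shape model points), plus the offset octants filling difference."""
    r = offsets.mean(axis=0)                              # average model point offset (Eq. 9)
    signs = (offsets - r < 0).astype(int)                 # s(x_i), s(y_i), s(z_i) (Eq. 13)
    k = signs[:, 0] + 2 * signs[:, 1] + 4 * signs[:, 2]   # octant index k_i (Eq. 14)
    h_r = np.bincount(k, minlength=8) / len(offsets)               # Equation 16
    h_o = np.bincount(k[voter_idx], minlength=8) / len(voter_idx)  # Equation 15
    f_ho = np.abs(h_o - h_r).sum()                        # offset octants filling difference (Eq. 17)
    return h_o, h_r, f_ho
```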
The above features could be extended by taking the x/y/z components of the distances and the 8 octant fill bins as additional feature elements. The model point weights feature 212 assembles model points of the shape model from independent sets of edge points in training images during training. The model point weights feature 212 introduces weights w_i to the localization classifier 128 training. The weights are the number of occurrences of each shape model point P_i in positive training cases (e.g., those containing the considered landmark with valid localization). The weights are represented in Equation 19:
Equation 19:
w_i = Σ_k (1, if model point P_i votes in positive training case k; 0, otherwise),
wherein k runs over all indices in the training set. In one embodiment, the weights w_i are then added to the previously defined features to obtain a weighted confidence, which is the relative number of weighted votes, in percent, over all m voting points and all n shape model points, and is represented as Equation 20:
Equation 20:
f_cw = 100 · (Σ_{j=1..m} w_{i(j)}) / (Σ_{i=1..n} w_i).
The model point weights feature 212 determines validity by determining if a solution exceeds a weighted confidence threshold of a test location (e.g., 50%, 60%, 70%). The model point weights feature 212 ranks the valid solutions based on the highest confidence.
In another embodiment, the model point weights feature 212 obtains the weighted offset distance of a given test location as represented by Equation 21:
Equation 21:
f_dw = || o_w - r_w ||,
wherein o_w is the average weighted voting point offset as represented by Equation 22:
Equation 22:
o_w = (Σ_{j=1..m} w_{i(j)} · d_{i(j)}) / (Σ_{j=1..m} w_{i(j)}),
and wherein r_w is the average weighted model point offset as represented by Equation 23:
Equation 23:
r_w = (Σ_{i=1..n} w_i · d_i) / (Σ_{i=1..n} w_i).
The model point weights feature 212 determines validity of a solution by determining if the solution is below a threshold offset (e.g., 1, 2, 3). The model point weights feature 212 ranks the valid solutions based on the lowest offset difference.
In yet another embodiment, the model point weights feature 212 obtains the weighted gradient distance of a given test location as represented by Equation 24:
Equation 24:
f_gw = || ω_w - ρ_w ||,
wherein ω_w is the average weighted voting gradient as represented by Equation 25:
Equation 25:
ω_w = (Σ_{j=1..m} w_{i(j)} · n_{i(j)}) / (Σ_{j=1..m} w_{i(j)}),
and wherein ρ_w is the average weighted model gradient as represented by Equation 26:
Equation 26:
ρ_w = (Σ_{i=1..n} w_i · n_i) / (Σ_{i=1..n} w_i).
The model point weights feature 212 determines validity of a solution by determining if the solution is below a threshold (e.g., 1, 2, 3). The model point weights feature 212 ranks the valid solutions based on the lowest gradient difference.
In another embodiment, the unweighted average voting values o and ω or the unweighted average reference values r and ρ are included in the distance calculations f_dw and f_gw. For the purpose of brevity, only the weighted averages are discussed herein.
In another embodiment, the model point weights feature 212 accumulates weighted histograms as represented by Equations 27 and 28:
Equation 27:
h_ow(l) = (Σ_{j=1..m} w_{i(j)} · [k_{i(j)} = l]) / (Σ_{j=1..m} w_{i(j)}), l = 0, ..., 7,
and
Equation 28:
h_rw(l) = (Σ_{i=1..n} w_i · [k_i = l]) / (Σ_{i=1..n} w_i), l = 0, ..., 7.
Similarly, the weighted gradient distribution histograms h_ωw and h_ρw are calculated by the model point weights feature 212. The two comparing weighted values are the weighted offset octants filling difference and the weighted gradient octants filling difference, as represented by Equations 29 and 30, respectively:
Equation 29:
f_how = Σ_{l=0..7} | h_ow(l) - h_rw(l) |,
and
Equation 30:
f_hgw = Σ_{l=0..7} | h_ωw(l) - h_ρw(l) |.
The model point weights feature 212 determines validity of a solution by comparing a binned octant occupancy count to a threshold for a corresponding bin (e.g., 10% ± 3%, 20% ± 5%, 40% ± 2%). The model point weights feature 212 ranks the valid solutions based on the compared bin counts.
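The weighted counterparts (Equations 20 through 26) might be computed as in the following sketch, assuming per-point weights w_i obtained from the training vote counts of Equation 19; the function name and array shapes are illustrative.

```python
import numpy as np

def weighted_features(offsets, gradients, point_weights, voter_idx):
    """Weighted confidence and weighted offset/gradient distances."""
    w_all = np.asarray(point_weights, dtype=float)
    w_vote = w_all[voter_idx]
    f_cw = 100.0 * w_vote.sum() / w_all.sum()                           # Equation 20
    o_w = (w_vote[:, None] * offsets[voter_idx]).sum(0) / w_vote.sum()
    r_w = (w_all[:, None] * offsets).sum(0) / w_all.sum()
    f_dw = np.linalg.norm(o_w - r_w)                                    # Equations 21-23
    g_w = (w_vote[:, None] * gradients[voter_idx]).sum(0) / w_vote.sum()
    p_w = (w_all[:, None] * gradients).sum(0) / w_all.sum()
    f_gw = np.linalg.norm(g_w - p_w)                                    # Equations 24-26
    return f_cw, f_dw, f_gw
```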
The segmentor 130 receives ranked matches 224 from the re-ranker 204 and segments the anatomical structure of interest in the volumetric image data using one of the ranked matches 224 such as a highest ranked match of the ranked matches 224 thereby producing an image segmentation 226.
FIGURE 5 illustrates an example of the localizer 126 from FIGURE 2 in connection with the localization classifier 128 and the segmentor 130. In this example, the localization classifier 128 includes a single confidence threshold classifier 502, a support vector machine classifier 504, and a grid searcher 506. As previously discussed, the localizer 126 ranks the localizations. The localization classifier receives the ranked matches 224 and utilizes the single threshold classifier 502, the support vector machine classifier 504, and the grid searcher 506 to determine localization validity.
The single confidence threshold classifier 502, the support vector machine classifier 504, and the grid searcher 506 are trained with support vector machine learning, wherein the framework is implemented around the Library for Support Vector Machines (LIBSVM) software library functions. The single confidence threshold classifier 502 determines the optimal threshold on the minimum number of votes that indicates a valid localization by constructing a sorted list of confidence values from all training cases. The list is traversed in one direction, and the confidence value with the largest number of correctly classified cases is selected as the classification threshold. This threshold may be used by the confidence algorithm 216.
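A simple sketch of this threshold selection, assuming boolean validity labels for the training cases and a "valid if confidence >= threshold" rule, could be:

```python
def best_confidence_threshold(confidences, labels):
    """Return the confidence threshold that classifies the most training cases
    correctly; labels[i] is True when case i is a valid localization."""
    best_t, best_correct = None, -1
    for t in sorted(confidences):  # traverse the sorted list of confidence values
        correct = sum((c >= t) == lab for c, lab in zip(confidences, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t
```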
The support vector machine classifier 504 determines the validity of a shape finder solution by calculating a prediction function, wherein the sign of the function value is used to predict the class. A vector calculated from the offset and gradient values stored in the shape model points that have voted for a solution, as described above, is input to the support vector machine classifier 504. Multiple support vector machine classifier 504 variants provided by the LIBSVM library may be used to classify the shape. For the sake of brevity, the standard support vector classification with a regularization parameter C (C-SVM) is discussed in further detail below. Given training feature vectors x_i, i = 1, ..., l, and an indicator vector y ∈ R^l for two classes with y_i ∈ {1, -1}, the classes are separated by the decision function sign(w^T Φ(x_i) + b), wherein w and b are weights and Φ(x_i) is a function that maps the vector x_i into a feature space.
The function balances a maximal margin between the two classes against the total distance of training points lying on the wrong side of the decision surface (found by solving a quadratic optimization). In one embodiment, where the feature space has a large dimension, practical training algorithms calculate the elements of a kernel K(x_i, x_j) ≡ Φ(x_i)^T Φ(x_j) rather than Φ(x_i) and solve a dual optimization problem that is equivalent to the primal problem.
C-support vector classification solves the primal optimization problem represented as Equation 31:
Equation 31:
min_{w,b,ξ} (1/2) · w^T w + C · Σ_{i=1..l} ξ_i, subject to y_i (w^T Φ(x_i) + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1, ..., l,
wherein C is a regularization parameter balancing generalization error and training error. Four different kernels are available within LIBSVM: linear (Equation 32), polynomial (Equation 33), radial basis function (Equation 34), and sigmoid (Equation 35):
Equation 32:
K(x_i, x_j) = x_i^T x_j,
Equation 33:
K(x_i, x_j) = (γ · x_i^T x_j + r)^d, γ > 0, d ∈ N+,
Equation 34:
K(x_i, x_j) = exp(-γ · || x_i - x_j ||^2), γ > 0,
and
Equation 35:
K(x_i, x_j) = tanh(γ · x_i^T x_j + r),
with the hyper parameters γ as a factor in the polynomial, radial basis function, and sigmoid kernels, the degree d in the polynomial kernel, and the offset coefficient r in the polynomial and sigmoid kernels. All hyper parameters are preset before an SVM training run and are not optimized by the training algorithm. The grid searcher 506 searches for the optimal parameters and features. For a single training run the penalty factor C and the factor γ are fixed, but different values lead to different classification accuracies. Optimal values for these hyper parameters are determined with a grid search over discrete combinations. The grid search exponentially varies the values, v = b^i, i = s, ..., f, with a double-type base b, the integer exponent i, the start exponent s, and the finish exponent f. Each parameter set is cross-validated, where the optimal hyper parameters are those resulting in the highest average accuracy over all cross-validation test folds. In case of equal accuracies, the parameter closest to the center of the search range is selected. For cross-validation the final result is that of the cross-validation with the best hyper parameters. If a classifier is trained from all cases, the grid search for optimal hyper parameters with cross-validation precedes the training.
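As one possible illustration (not the patent's own code), a cross-validated exponential grid search over C and γ can be sketched with scikit-learn's SVC, which wraps LIBSVM; the feature matrix X and labels y below are random placeholders for the re-ranking feature vectors and validity labels.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data: one row of re-ranking features (f_c, f_d, f_g, histogram
# bins, ...) per candidate localization, and +1/-1 validity labels.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = rng.choice([1, -1], size=200)

# Exponential grid C = b^i, gamma = b^i (base b = 2 assumed here).
param_grid = {
    "C": [2.0 ** i for i in range(-5, 6)],
    "gamma": [2.0 ** i for i in range(-7, 2)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # cross-validated grid search
search.fit(X, y)

# The sign of the trained decision function predicts localization validity.
validity = np.sign(search.best_estimator_.decision_function(X))
```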
The segmentor 130 receives ranked valid matches 508 from the re-ranker 204 and segments the anatomical structure of interest in the volumetric image data using one of the ranked valid matches 508 such as a highest ranked match of the ranked valid matches 508.
FIGURE 6 illustrates a variation of the localizer 126. In this example, the localizer 126 includes the ranker 202, the re-ranker 204 and a quality function determiner 602.
The ranker 202 produces a set of scored matches as described in FIGURE 2. The quality function determiner 602 receives the scored matches 206, calculates a quality function for the scored matches 206 and ranks the scored matches 206 according to the quality function. A validity classification procedure, based on a decision function that combines the additional features 214 and decides the validity of a particular match by the sign of a trained decision function, serves as the quality function to produce quality function matches 604. The re-ranker 204 receives the quality function matches and, as described in FIGURE 2, ranks the quality function matches to produce re-ranked quality function matches 606. The re-ranked quality function matches 606 serve as an input for the segmentor 130, as described in FIGURE 2, and/or as an input for the localization classifier 128, as described in FIGURE 5.
FIGURE 7 illustrates an example method in accordance with an embodiment herein. The ordering of the following acts is for explanatory purposes and is not limiting. As such, one or more of the acts can be performed in a different order, including, but not limited to, concurrently. Furthermore, one or more of the acts may be omitted and/or one or more other acts may be added. At 702, an anatomical structure of interest is detected in the volumetric image data using a shape model representing the anatomical structure of interest, as described herein and/or otherwise, and a plurality of candidate matches where each candidate match includes a set of points of the shape model is generated.
At 704, a match score is generated for the plurality of candidate matches, where each candidate includes a set of points of the shape model that matched corresponding points in the image, as described herein and/or otherwise.
At 706, the validity of a match is determined, as described herein and/or otherwise.
At 708, the matches are ranked based on the validity of the match, as described herein and/or otherwise.
At 710, additional features 214 are applied to the matches to re-rank the matches, as described herein and/or otherwise.
At 712, a validity of a match is determined, as described herein and/or otherwise.
At 714, a validated match is used to segment the anatomical structure of interest from the volumetric image data.
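Purely as a sketch of how acts 702 through 714 could be orchestrated, the following outline passes each stage as a callable; all of these callables (candidate generation, scoring, re-ranking features, validity classification, segmentation) are placeholders supplied by the caller and are not defined in the patent.

```python
def localize_and_segment(volume, candidates_fn, score_fn, rerank_key_fn,
                         validity_fn, segment_fn):
    """Detect, score, rank, re-rank, validate, and segment (acts 702-714)."""
    candidates = candidates_fn(volume)                                  # act 702
    scored = [(score_fn(volume, c), c) for c in candidates]             # act 704
    ranked = sorted(scored, key=lambda sc: sc[0], reverse=True)         # acts 706-708
    reranked = sorted(ranked, key=lambda sc: rerank_key_fn(sc[1]),
                      reverse=True)                                     # act 710
    valid = [c for _, c in reranked if validity_fn(c)]                  # act 712
    return segment_fn(volume, valid[0]) if valid else None              # act 714
```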
The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium (which excludes transitory medium), which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Claims

1. A system (100), comprising:
a computing system (118), including;
a processor (120); and
a computer readable storage medium (122) with computer readable and executable instructions (124), including a localizer (126) with a re-ranker (204),
wherein the processor is configured to execute the re-ranker instructions, which causes the re-ranker to receive a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate match includes a set of points of a shape model, and rank the ranked or scored candidate matches based on a predetermined set of features; and
wherein the shape model represents an anatomical structure of interest.
2. The system of claim 1, wherein the predetermined set of features includes a scalar feature (208).
3. The system of claim 2, wherein the scalar feature includes a confidence threshold, and the processor ranks the candidate matches based on the confidence threshold.
4. The system of any of claims 2 to 3, wherein the scalar feature includes an offset distance threshold, and the processor ranks the candidate matches based on the offset distance threshold.
5. The system of any of claims 2 to 4, wherein the scalar feature includes an offset gradient threshold, and the processor ranks the candidate matches based on the offset gradient threshold.
6. The system of any of claims 1 to 5, wherein the predetermined set of features includes a histogram feature (210), and the processor ranks the candidate matches by comparing a binned octant occupancy count of the histogram to a threshold for a corresponding bin.
7. The system of any of claims 1 to 6, wherein the predetermined set of features includes a model point weights feature (212), and the processor weights the candidate matches based on the model point weights feature and ranks the weighted candidate matches.
8. The system of any of claims 1 to 7, wherein the instructions further include a localization classifier (128) which when executed causes the processor to determine a validity of the ranked matches.
9. The system of claim 8, wherein the localization classifier comprises a confidence threshold, and the processor determines the validity of the ranked matches based on the confidence threshold.
10. The system of any of claims 8 to 9, wherein the localization classifier comprises a support vector machine, and the processor determines the validity of the ranked matches by calculating a prediction function using the support vector machine, wherein a sign of the function value is used to predict the class.
11. The system of any of claims 8 to 10, wherein the localization classifier comprises a grid searcher, and the processor determines the validity of the ranked matches by searching for optimal parameters and features with the grid searcher.
12. The system of any of claims 1 to 11, wherein the localizer further comprises a quality function determiner (602), and the processor ranks scored matches by calculating a quality function for the matches with the quality function determiner and ranks the scored matches according to the quality function.
13. The system of claim 1, wherein the instructions further include a segmentor (130), which when executed causes the processor to segment the anatomical structure of interest from the volumetric image data.
14. A method, comprising: receiving a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate match includes a set of points of a shape model that represents an anatomical structure of interest;
applying a predetermined set of features to the plurality of ranked or scored candidate matches to rank the scored matches thereby generating ranked matches (224); and
applying a localization classifier to determine a validity of a match in the set of ranked matches to generate ranked valid matches (508).
15. The method of claim 14, wherein the predetermined set of features includes a scalar feature, a histogram feature, and/or a model point weights feature.
16. The method of claim 14 or 15, wherein the localization classifier includes a single confidence threshold classifier, a support vector machine classifier, and/or a grid searcher.
17. The method of any of claims 14 to 16, wherein the method further comprises segmenting the anatomical structure of interest from the volumetric image data based on at least one of the ranked valid matches.
18. The method of claim 17, further comprising:
segmenting the image based on a highest ranked match as determined by the additional features and the localization classifier.
19. The method of any of claims 14 to 18, further comprising:
applying a training procedure used to train the localization classifier.
20. A computer readable medium encoded with executable instructions which, when executed by a processor, cause the processor to:
receive a plurality of ranked or scored candidate matches, wherein a ranked or scored candidate includes a set of points of a shape model that represents an anatomical structure of interest; apply a predetermined set of features to the scored matches to rank the plurality of ranked or scored candidate matches thereby generating ranked matches; and
apply a localization classifier to determine a validity of a match in the set of ranked matches to generate ranked valid matches.
PCT/EP2018/073077 2017-09-01 2018-08-28 Localization of anatomical structures in medical images WO2019042962A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762553161P 2017-09-01 2017-09-01
US62/553,161 2017-09-01

Publications (1)

Publication Number Publication Date
WO2019042962A1 true WO2019042962A1 (en) 2019-03-07

Family

ID=63491590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/073077 WO2019042962A1 (en) 2017-09-01 2018-08-28 Localization of anatomical structures in medical images

Country Status (1)

Country Link
WO (1) WO2019042962A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130345555A1 (en) * 2005-10-11 2013-12-26 Takeo Kanade Sensor guided catheter navigation system
WO2009058915A1 (en) * 2007-10-29 2009-05-07 The Trustees Of The University Of Pennsylvania Computer assisted diagnosis (cad) of cancer using multi-functional, multi-modal in-vivo magnetic resonance spectroscopy (mrs) and imaging (mri)
US20150302602A1 (en) * 2012-12-03 2015-10-22 Koninklijke Philips N.V. Image processing device and method
WO2016049681A1 (en) * 2014-09-29 2016-04-07 Signostics Limited Ultrasound image processing system and method

Similar Documents

Publication Publication Date Title
US11288808B2 (en) System and method for n-dimensional image segmentation using convolutional neural networks
US11010630B2 (en) Systems and methods for detecting landmark pairs in images
US10489678B2 (en) Image comparison tool tolerant to deformable image matching
Largent et al. Comparison of deep learning-based and patch-based methods for pseudo-CT generation in MRI-based prostate dose planning
Küstner et al. A machine-learning framework for automatic reference-free quality assessment in MRI
Menze et al. The multimodal brain tumor image segmentation benchmark (BRATS)
CN109003267B (en) Computer-implemented method and system for automatically detecting target object from 3D image
Liu et al. Mediastinal lymph node detection and station mapping on chest CT using spatial priors and random forest
US8588519B2 (en) Method and system for training a landmark detector using multiple instance learning
US8837771B2 (en) Method and system for joint multi-organ segmentation in medical image data using local and global context
CN109410188B (en) System and method for segmenting medical images
Mechrez et al. Patch-based segmentation with spatial consistency: application to MS lesions in brain MRI
US9576356B2 (en) Region clustering forest for analyzing medical imaging data
US9367924B2 (en) Method and system for segmentation of the liver in magnetic resonance images using multi-channel features
CN106062782B (en) Unsupervised training for atlas-based registration
KR20140114308A (en) system and method for automatic registration of anatomic points in 3d medical images
KR101645292B1 (en) System and method for automatic planning of two-dimensional views in 3d medical images
US20200349706A1 (en) Method and system for detecting chest x-ray thoracic diseases utilizing multi-view multi-scale learning
CN112529900A (en) Method, device, terminal and storage medium for matching ROI in mammary gland image
US8761480B2 (en) Method and system for vascular landmark detection
WO2023104464A1 (en) Selecting training data for annotation
WO2019042962A1 (en) Localization of anatomical structures in medical images
Agomma et al. Automatic detection of anatomical regions in frontal X-ray images: Comparing convolutional neural networks to random forest
Bin et al. Rapid multimodal medical image registration and fusion in 3D conformal radiotherapy treatment planning
Spanier et al. Automatic atlas-free multiorgan segmentation of contrast-enhanced CT scans

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18765386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18765386

Country of ref document: EP

Kind code of ref document: A1