WO2017062453A1 - Image segmentation of organs depicted in computed tomography images - Google Patents


Info

Publication number
WO2017062453A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
displacement
organ
training
regression
Application number
PCT/US2016/055495
Other languages
French (fr)
Inventor
Dinggang Shen
Yaozong Gao
Original Assignee
The University Of North Carolina At Chapel Hill
Application filed by The University of North Carolina at Chapel Hill
Publication of WO2017062453A1


Classifications

    All classifications fall under G (Physics); G06 (Computing; Calculating or Counting); G06T (Image Data Processing or Generation, in General):
    • G06T 7/12 Image analysis; segmentation; edge detection: edge-based segmentation
    • G06T 7/143 Image analysis; segmentation; edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T 7/149 Image analysis; segmentation; edge detection involving deformable models, e.g. active contour models
    • G06T 2200/04 Indexing scheme for image data processing or generation involving 3D image data
    • G06T 2207/10081 Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20076 Special algorithmic details: probabilistic image processing
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/30092 Subject of image: biomedical image processing; stomach; gastric


Abstract

Methods, systems, and non-transitory computer storage media for image segmentation are disclosed. In some examples, the method includes performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image. The method includes iteratively repeating the displacement regression and organ classification using, at each iteration, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration. The method includes segmenting the CT image to identify a target organ depicted in the CT image.

Description

IMAGE SEGMENTATION OF ORGANS DEPICTED IN COMPUTED TOMOGRAPHY IMAGES
GOVERNMENT INTEREST
This invention was made with government support under grant number CA140413 awarded by the National Institutes of Health (NIH). The government has certain rights to this invention.
PRIORITY CLAIM
This application claims the benefit of U.S. Provisional Application Serial No. 62/237,506, filed October 5, 2015, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This specification relates generally to computer systems for image segmentation, e.g., image segmentation of organs depicted in computed tomography images.
BACKGROUND
Prostate cancer is a common type of cancer in American men. It is also the second leading cause of cancer death in American men. When a patient is diagnosed with prostate cancer in the early stage, image guided radiotherapy (IGRT) is usually recommended as one of the effective treatments for prostate cancer. IGRT consists of a planning stage followed by a treatment stage. In the planning stage, a computed tomography (CT) scan called a planning CT is acquired from the patient. Radiation oncologists then delineate the target (the prostate and sometimes the seminal vesicles) and nearby organs at risk on the CT scan; frequently this is done manually.
Based on the organ delineations, a treatment plan is designed with the goal of delivering the prescribed dose to the target volume while sparing nearby healthy organs such as the bladder, rectum and femoral heads. The treatment stage typically lasts several weeks, with one treatment per day. To account for daily prostate motion, a CT scan called a treatment CT can be acquired inside the radiotherapy vault on each treatment day, right before the radiation therapy. Since the treatment CT captures a current snapshot of the patient's anatomy, the patient can be set up so that radiation can be aimed at the targeted area as planned. In addition, if the change in anatomy is significant, radiation oncology staff can adapt the treatment plan to optimize the distribution of radiation dose, effectively treating the current anatomy of the prostate while avoiding neighboring normal organs. Consequently, IGRT increases the probability of tumor control and reduces the possibility of side effects.
There are at least two segmentation stages in IGRT: planning-CT segmentation and treatment-CT segmentation. The efficacy of IGRT depends on the accuracy of both. Planning-CT segmentation aims to accurately segment the target (e.g., prostate) and nearby organs from CT images. Treatment-CT segmentation aims to accurately and quickly localize the prostate in daily treatment CT images.
SUMMARY
Methods, systems, and non-transitory computer storage media for image segmentation are disclosed. In some examples, the method includes performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image. The method includes iteratively repeating the displacement regression and organ classification using, at each iteration, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration. The method includes segmenting the CT image to identify a target organ depicted in the CT image.
The subject matter described in this specification may be implemented in hardware, software, firmware, or combinations of hardware, software and/or firmware. In some examples, the subject matter described in this specification may be implemented using a non-transitory computer readable medium storing computer executable instructions that when executed by one or more processors of a computer cause the computer to perform operations. Computer readable media suitable for implementing the subject matter described in this specification include non-transitory computer-readable media, such as disk memory devices, chip memory devices, programmable logic devices, random access memory (RAM), read only memory (ROM), optical read/write memory, cache memory, magnetic read/write memory, flash memory, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described in this specification may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1A depicts three example CT scans and corresponding pelvic organ segmentations;
Figure 1B shows an example system for segmenting CT images;
Figure 1C is a block diagram of an example image analysis computer system;
Figure 2 is a flow chart illustrating steps for implementing a regression-based deformable model;
Figure 3 illustrates jointly performing displacement regression and organ classification by two series of images;
Figures 4A-B illustrate example Haar-like features;
Figure 5A illustrates iterative structural refinement of the displacement field;
Figure 5B illustrates shape refinement by the auto-context model;
Figure 6 is a block diagram of the auto-context model with n iterations; and
Figure 7 is a flow diagram of an example method for image segmentation.
DETAILED DESCRIPTION
Figure 1A depicts three example CT scans and corresponding pelvic organ segmentations. Each column 102a-c corresponds to a different patient, and each row 104a-c shows a different visualization of a CT image. The first row 104a shows a sagittal CT slice. The second row 104b shows the same slice overlaid with segmentations. The third row 104c shows a 3D view of the segmentations.
It is generally difficult to automatically segment male pelvic organs from CT images due to three challenges. 1) Male pelvic organs exhibit low contrast in CT images. For example, the organ boundaries are often indistinct, especially when two nearby organs touch each other. 2) The shapes of the bladder and rectum are highly variable. They can change significantly across patients due to different amounts of urine in the bladder and bowel gas in the rectum. 3) The rectum appearance is highly variable due to the uncertainty of bowel gas and filling.
Figure 1B shows an example system 120 for segmenting CT images, e.g., segmenting male pelvic organs from CT images. The system 120 includes an image analysis computer system 122 in communication with a CT scanner 124 configured to perform a CT scan of a patient 126. The image analysis computer system 122 is programmed to use regression-based deformable models to segment male pelvic organs from CT images.
Figure 1C is a block diagram of an example image analysis computer system 122. The image analysis computer system 122 includes one or more processors 128 and memory 130 storing one or more executable computer programs for the processors 128. For example, the image analysis computer system 122 can be a laptop or desktop computer system, or the image analysis computer system 122 can be implemented on a server executing on a distributed computing system that hosts cloud computing resources.
The image analysis computer system 122 includes a displacement regressor 132 and an organ classifier 134 that share a multi-task random forest 136 to create displacement fields 146 and classification maps 150 from CT images 144. The image analysis computer system 122 includes an iterative refiner that repeats displacement regression and organ classification to refine a displacement field. The image analysis computer system 122 includes an auto-context trainer 140 for training an auto-context model using training data 148. The image analysis computer system 122 includes a hierarchical deformer 142 for segmenting CT images using displacement fields.
In operation, the image analysis computer system 122 learns a displacement regressor for guiding deformable segmentation. The image analysis computer system 122 can use the underlying image patch of each vertex to estimate a 3D displacement, pointing to the nearest voxel on the target organ boundary, for guiding shape deformation. Compared to some types of conventional local search, the estimated displacement provides a non-local external force, thus making the deformable models robust to initialization.
The multi-task random forest jointly learns the displacement regressor and organ classifier. Compared to separate random forest regression or classification, the multi-task random forest can improve the accuracy of the estimated displacement fields and organ classification maps. An auto-context model can be used as a structural refinement method. The auto-context model can be used to iteratively refine the estimated displacement fields by injecting the structural prior learned from training images.
Figure 2 is a flow chart 200 illustrating steps for implementing a regression-based deformable model. A first block 202 shows an example image illustrating an initialization step where a contour overlaid on the image shows the initialized deformable model. A second block 204 illustrates displacement field estimation, including using a multi-task random forest and iterative structural refinement. Third and fourth blocks 206 and 208 show example images illustrating a displacement field as displacement direction and displacement magnitude. A fifth block 210 illustrates regression-based hierarchical deformation, including steps of translation, rigid deformation, affine deformation, and free-form deformation. A sixth block 212 shows an example image illustrating a segmentation step where a contour overlaid on the image shows the final segmentation.
Conventional random forests focus on only one task, such as regression or classification. Learning related tasks in parallel using a shared representation can improve the generalization of the learned model. Moreover, what is learned for each task can aid in learning other tasks.
The systems described in this specification perform at least two tasks using a shared multi-task random forest: displacement regression and organ classification. The inputs of the two tasks are the same, namely a local patch centered at the query voxel. The difference between the two tasks is the prediction target. Displacement regression aims to predict a 3D displacement pointing from the query voxel to the nearest voxel on the boundary of the target organ, while organ classification aims to predict the label/likelihood of the query voxel belonging to a specific organ (e.g., bladder). These two tasks are highly related, since the displacement field and the classification map are two representations of an organ boundary.
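For illustration only (this sketch is not from the patent), the regression target just described can be derived from a training segmentation with a Euclidean distance transform; the function name and toy mask below are assumptions.

    import numpy as np
    from scipy import ndimage

    def nearest_boundary_displacements(mask):
        # Boundary voxels: organ voxels that touch at least one background voxel.
        boundary = mask & ~ndimage.binary_erosion(mask)
        # EDT of the boundary's complement; return_indices yields, for each
        # voxel, the coordinates of its nearest boundary voxel.
        _, nearest = ndimage.distance_transform_edt(~boundary, return_indices=True)
        grid = np.indices(mask.shape)
        return (nearest - grid).astype(np.float32)  # (3, Z, Y, X) displacements

    mask = np.zeros((32, 32, 32), dtype=bool)
    mask[10:22, 8:20, 12:24] = True  # toy "organ"
    disp = nearest_boundary_displacements(mask)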
Figure 3 illustrates jointly performing displacement regression and organ classification by two series of images. A first series 302 of images shows displacement regression resulting in a displacement field, and a second series 304 of images shows organ classification resulting in a classification map. The images show a query voxel 306 and a local patch 308 centered on the query voxel 306.
To adapt a random forest for multi-task learning, the objective function of the random forest can be modified as follows:
\{f^*, t^*\} = \arg\max_{f,t} \sum_i w_i G_i \quad (1)
where $f$ and $t$ are the feature and threshold of one node to be optimized, respectively, and $w_i$ is the weight for each task. $G_i$ is the gain of task $i$ from using $\{f, t\}$ to split the training data arriving at this node into left and right child nodes. When considering only two tasks, i.e., displacement regression and organ classification, the objective function can be further specialized as follows:
G = G_{\mathrm{regress}} + G_{\mathrm{class}} = \frac{1}{Z_v}\Big(v - \sum_{j \in \{L,R\}} \frac{N^j}{N}\, v^j\Big) + \frac{1}{Z_e}\Big(e - \sum_{j \in \{L,R\}} \frac{N^j}{N}\, e^j\Big) \quad (2)
where $v$, $e$, and $N$ are the average variance over the displacement vectors, the entropy of the class labels and the number of training samples at the current node, respectively. Symbols with superscript $j \in \{L, R\}$ indicate the same measurements computed after splitting into the left or right child node. $G_{\mathrm{regress}}$ is the gain of displacement regression, which is measured by variance reduction. $G_{\mathrm{class}}$ is the gain of organ classification, which is measured by entropy reduction. Since the magnitudes of variance and entropy reductions are not on the same scale, both can be normalized by dividing by the average variance and entropy at the root node ($Z_v$ and $Z_e$), respectively.
With this modification (2), the random forest is configured for both displacement regression and organ classification, which means that the system can use one random forest to jointly predict the 3D displacement and the organ likelihood of a query voxel. Compared to separate displacement regression and organ classification by a conventional random forest, the multi-task random forest can utilize the commonality between these two tasks, thus often leading to better performance.
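As a concrete reading of equation (2), the following sketch (our illustration; the helper names are not from the patent) scores a candidate split by normalized variance reduction on displacement vectors plus normalized entropy reduction on organ labels.

    import numpy as np

    def avg_displacement_variance(d):
        # d: (N, 3) displacement vectors; average variance over the 3 components.
        return float(np.mean(np.var(d, axis=0))) if len(d) else 0.0

    def label_entropy(y):
        # y: (N,) integer organ labels.
        if len(y) == 0:
            return 0.0
        p = np.bincount(y) / len(y)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def multitask_gain(d, y, go_left, z_v, z_e):
        # Gain of splitting samples (d, y) by the boolean mask go_left; z_v and
        # z_e are the average variance/entropy at the root node, used to bring
        # both reductions to a comparable scale, as in equation (2).
        n = len(y)
        g_regress = avg_displacement_variance(d)
        g_class = label_entropy(y)
        for side in (go_left, ~go_left):
            w = side.sum() / n
            g_regress -= w * avg_displacement_variance(d[side])
            g_class -= w * label_entropy(y[side])
        return g_regress / z_v + g_class / z_e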
The input of the multi-task random forest is a local patch centered at the query voxel. The output is the 3D displacement from the query voxel to the nearest voxel on the boundary of the target organ, and the likelihood of the query voxel belonging to the target organ. Since CT images can be very noisy, it may be useful to avoid directly using local intensity patches as features. A common practice is to extract several low-level appearance features from the local patch and use them as input to the random forest. Among various features, Haar-like features can be used because they are robust to noise and can be efficiently computed using the integral image.
For example, consider the following two types of Haar-like features: 1) one-block Haar-like features compute the average intensity at one location within the local patch; 2) two-block Haar-like features compute the average intensity difference between two locations within the local patch. The mathematical definitions of these features can be formulated as follows:

h(I_x; c_1, s_1, c_2, s_2, \lambda) = \frac{1}{|B(c_1, s_1)|} \sum_{u \in B(c_1, s_1)} I_x(u) - \lambda \cdot \frac{1}{|B(c_2, s_2)|} \sum_{u \in B(c_2, s_2)} I_x(u) \quad (3)

where $I_x$ denotes a local patch centered at voxel $x$, $h$ denotes one Haar-like feature with parameters $\{c_1, s_1, c_2, s_2, \lambda\}$, $B(c, s)$ denotes the cubic block with center $c$ and size $s$, $c_1$ and $s_1$ are the center and size of the positive block, respectively, $c_2$ and $s_2$ are the center and size of the negative block, respectively, and $\lambda \in \{0, 1\}$ is a switch between the two types of Haar-like features. When $\lambda = 0$, (3) becomes a one-block Haar-like feature. When $\lambda = 1$, (3) becomes a two-block Haar-like feature.
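A minimal sketch of equation (3) (our code; helper names are assumptions): a 3D integral image makes any block sum an O(1) lookup, which is what makes these features cheap to evaluate.

    import numpy as np

    def integral_image(patch):
        # ii[z, y, x] == patch[:z, :y, :x].sum(); zero-padded on the low side.
        ii = patch.cumsum(0).cumsum(1).cumsum(2)
        return np.pad(ii, ((1, 0), (1, 0), (1, 0)))

    def block_mean(ii, c, s):
        # Average intensity of the cubic block with center c and size s, assuming
        # (as the text requires) that the block lies inside the patch.
        z0, y0, x0 = np.asarray(c) - s // 2
        z1, y1, x1 = z0 + s, y0 + s, x0 + s
        total = (ii[z1, y1, x1] - ii[z0, y1, x1] - ii[z1, y0, x1] - ii[z1, y1, x0]
                 + ii[z0, y0, x1] + ii[z0, y1, x0] + ii[z1, y0, x0] - ii[z0, y0, x0])
        return total / s ** 3

    def haar_feature(ii, c1, s1, c2, s2, lam):
        # lam = 0: one-block feature; lam = 1: two-block feature, as in (3).
        value = block_mean(ii, c1, s1)
        if lam == 1:
            value -= block_mean(ii, c2, s2)
        return value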
Figures 4A-B illustrate example Haar-like features. Figure 4A shows an image of a local patch 402 including a one-block Haar-like feature 406. Figure 4B shows an image of a local patch 404 including a two-block Haar-like feature having a positive block 408 and a negative block 410.
In the training stage, the system trains each binary tree of the random forest independently. For each tree, a set of Haar-like features is generated by uniformly and randomly sampling the Haar-like feature parameters $\{c_1, s_1, c_2, s_2, \lambda\}$, under the constraint that the positive and negative blocks stay inside the local patch. These random Haar-like features are used as the feature representation for each training sample/voxel. Since some implementations of the random forest have a built-in feature selection mechanism, the forest selects an optimal feature set, which can be useful for both displacement regression and organ classification.
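Continuing the sketch above (ours, not the patent's), the random parameters for one tree might be drawn as follows, rejecting any draw whose blocks fall outside the local patch:

    import numpy as np

    def sample_haar_params(patch_size, n_features, seed=0):
        rng = np.random.default_rng(seed)

        def inside(c, s):
            lo = c - s // 2
            return bool(np.all(lo >= 0) and np.all(lo + s <= patch_size))

        params = []
        while len(params) < n_features:
            lam = int(rng.integers(0, 2))                      # feature type switch
            s1, s2 = (int(v) for v in rng.integers(1, patch_size // 2, size=2))
            c1, c2 = rng.integers(0, patch_size, size=(2, 3))  # block centers
            if inside(c1, s1) and (lam == 0 or inside(c2, s2)):
                params.append((tuple(c1), s1, tuple(c2), s2, lam))
        return params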
Figures 5A-B illustrate iterative structural refinement of the displacement field. Figure 5A is a flowchart of iterative refinement that includes example images for purposes of illustration. A CT image 502, taken from a testing CT scan, has a rectangle overlaid on the image to illustrate a local patch around a query point at the center of the rectangle. Appearance features are extracted, and then a first multi-task random forest 504 is used to determine a first result set of a likelihood map and a displacement field 506. Context features are then extracted, and a second multi-task random forest 508 is used to determine a second, refined likelihood map and displacement field.
Figure 5B shows example images of shape refinements by the auto-context model, which is described further below. The example images include a first row 522 of images of a prostate during testing, training, and as refined by auto-context. The example images include a second row 524 of images of a bladder during testing, training, and as refined by auto-context.
Given a new testing image, a multi-task random forest can be used to estimate the 3D displacement field and organ classification/likelihood map by visiting each voxel in the new testing image. However, since both the displacement and organ likelihood of each voxel are predicted independently from those of nearby voxels, the estimated displacement field and organ likelihood map are often noisy, e.g., as shown in the first row of Figure 5A. To overcome this drawback, neighborhood structural information can be considered during voxel-wise prediction.
Auto-context is a classification refinement method that builds upon the idea of stacking. It uses an iterative procedure to refine the likelihood map from voxel-wise classification. For purposes of illustration, the following paragraphs explain the auto-context model in the case of binary classification. The auto-context model can be extended for a multi-task random forest.
The training of auto-context typically takes several iterations, e.g., 2-3 iterations. A classifier is trained at each iteration. In the first iteration, appearance features (e.g., Haar-like features) are extracted from the CT image and used together with the ground-truth label of each training voxel to train the first classifier. Once the first classifier is trained, it is applied to each training image to generate a likelihood map. In the second iteration, for each training voxel, the system extracts not only appearance features from the CT image, but also Haar-like features from the tentatively estimated likelihood map. These features from the likelihood map are called "context features," because they capture context information, i.e., neighborhood class information.
With the introduction of new features, a second classifier is trained. As the second classifier considers not only CT appearance, but also neighborhood class information, it often leads to a better likelihood map. Given a refined likelihood map by the second classifier, the context features are updated and can be used together with the appearance features to train the third classifier. The same procedure is repeated until the maximal iteration is reached. In some examples, 2-3 iterations are used, e.g., to reduce the total training time.
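A hedged sketch of this training loop using scikit-learn random forests (our formulation; extract_appearance and extract_context are placeholder per-voxel feature extractors, not an API from the patent):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_auto_context(images, labels, extract_appearance, extract_context,
                           n_iters=3):
        classifiers, context_maps = [], [None] * len(images)
        for it in range(n_iters):
            feats_all, y_all = [], []
            for img, lab, ctx in zip(images, labels, context_maps):
                feats = extract_appearance(img)        # (n_voxels, n_features)
                if it > 0:                             # add context features
                    feats = np.hstack([feats, extract_context(ctx)])
                feats_all.append(feats)
                y_all.append(lab.ravel())
            clf = RandomForestClassifier(n_estimators=20)
            clf.fit(np.vstack(feats_all), np.hstack(y_all))
            classifiers.append(clf)
            # Re-estimate likelihood maps: the context for the next iteration.
            new_maps = []
            for img, lab, ctx in zip(images, labels, context_maps):
                feats = extract_appearance(img)
                if it > 0:
                    feats = np.hstack([feats, extract_context(ctx)])
                new_maps.append(clf.predict_proba(feats)[:, 1].reshape(lab.shape))
            context_maps = new_maps
        return classifiers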
Figure 6 is a block diagram of the auto-context model with n iterations. Given a new testing CT image 602, the learned classifiers 604, 606, and 608 are sequentially applied. In the first iteration, the first learned classifier 604 is applied voxel-wise to the testing image 602 to generate a likelihood map using only appearance features. In the second iteration, the second learned classifier 606 is used to predict a new likelihood map by combining appearance features from the CT image 602 with context features from the likelihood map of the previous iteration. This procedure is repeated until all learned classifiers have been applied. The likelihood map 610 output by the last classifier 608 is the output of the auto-context model.
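The matching test-time cascade, in the same sketch conventions as above:

    import numpy as np

    def apply_auto_context(img, classifiers, extract_appearance, extract_context):
        likelihood = None
        for it, clf in enumerate(classifiers):
            feats = extract_appearance(img)
            if it > 0:
                feats = np.hstack([feats, extract_context(likelihood)])
            likelihood = clf.predict_proba(feats)[:, 1]
        return likelihood  # the output of the last classifier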
The context features encode neighborhood structural information, which can be used to enforce the structural priors learned from training images. For example, Figure 5B illustrates, for two example cases, a binary training image, which represents a shape. Since the classifiers learn the neighborhood structural information of the training shape, the testing shape gradually evolves to be the training shape under the iterative classification. Figure 5B gives the results for two cases, where the training shapes are the prostate and the bladder, respectively. The testing shapes refined by the auto-context are almost identical to the respective training shapes, which indicates that the structural information of training images can be enforced in the testing image by the auto-context model.
To extend the auto-context model for the multi-task random forest, the system extracts context features (i.e., Haar-like features) not only from the likelihood map, but also from the 3D displacement field, as illustrated in Figure 5A. Since context features capture the structural prior of the training images, extracting them from both the likelihood map and the displacement field lets the system enforce the structural prior on both prediction maps during voxel-wise estimation. As shown in the example images in the bottom row of Figure 5A, the likelihood map and displacement field are significantly improved by the auto-context model.
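As a rough stand-in for Haar-like context features (our simplification, not the patent's formulation), per-voxel context features for the multi-task extension can be sampled from both the likelihood map and the three displacement components at a set of neighborhood offsets:

    import numpy as np

    def multitask_context_features(likelihood, disp_field, voxel, offsets):
        # likelihood: (Z, Y, X); disp_field: (3, Z, Y, X); voxel: (z, y, x).
        z, y, x = voxel
        vals = []
        for dz, dy, dx in offsets:
            vals.append(likelihood[z + dz, y + dy, x + dx])
            vals.extend(disp_field[:, z + dz, y + dy, x + dx])
        return np.asarray(vals, dtype=np.float32)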
Given a 3D displacement field estimated by a multi-task random forest with auto-context, the system can guide a deformable model for organ segmentation. The system can implement a hierarchical deformation strategy. To start segmentation, the mean shape model, calculated as the average of all training shapes, is first initialized on the center of a testing image (e.g., as illustrated in Figure 2).
During deformable segmentation, initially the shape model is only allowed to translate under the guidance of the estimated displacement field. Once it is well positioned, the system can estimate its orientation by allowing it to rigidly rotate. Afterwards, the deformation is further relaxed to an affine transformation for estimating the scaling and shearing parameters of the shape model. Finally, the shape model is freely deformed under the guidance of the displacement field. Algorithm 1, below, provides an example implementation of a regression-based hierarchical deformation strategy.
[Algorithm 1 — regression-based hierarchical deformation strategy — appears only as an image in the source.]
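Because Algorithm 1 survives only as an image, the following is a hedged reconstruction of the hierarchical strategy just described, not the patent's literal algorithm. Each stage fits a progressively richer transform (translation, rigid, affine, then free-form) to the displacements predicted at the current vertex positions; predict_displacements stands in for the multi-task forest with auto-context.

    import numpy as np

    def fit_transform(src, dst, kind):
        # Least-squares rigid or affine map taking src toward dst (row vectors).
        c_s, c_d = src.mean(axis=0), dst.mean(axis=0)
        a, b = src - c_s, dst - c_d
        if kind == "affine":
            m, *_ = np.linalg.lstsq(a, b, rcond=None)
        else:  # rigid: orthogonal Procrustes via SVD, reflections excluded
            u, _, vt = np.linalg.svd(a.T @ b)
            if np.linalg.det(u @ vt) < 0:
                u[:, -1] *= -1
            m = u @ vt
        return a @ m + c_d

    def deform_hierarchically(vertices, predict_displacements, n_iters=10):
        for stage in ("translation", "rigid", "affine", "free-form"):
            for _ in range(n_iters):
                d = predict_displacements(vertices)  # (N, 3) estimated displacements
                if stage == "translation":
                    vertices = vertices + d.mean(axis=0)
                elif stage == "free-form":
                    vertices = vertices + d          # plus smoothing in practice
                else:
                    vertices = fit_transform(vertices, vertices + d, stage)
        return vertices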
Compared to some conventional deformable models, the regression-based deformable models can provide one or more of the following advantages:
• Non-local deformation: Deformation is no longer local in the regression-based deformable models. The non-local deformation eliminates the need to specify a search range, which is often required in conventional deformable models. Moreover, shape models, even when initialized far away from the target organ, can still be rapidly deformed to the correct position under the guidance of the estimated displacement field.
• Robust to initialization: Regression-based deformable models can be robust to arbitrary initializations. In some examples, it is sufficient to initialize the mean shape model on the image center, an initialization under which conventional deformable models fail in most applications.
• Adaptive deformation parameters: The deformation direction and step size of each vertex are adaptively determined at each iteration to optimally drive the shape model onto the target organ. This is different from some conventional deformable models, which use a fixed deformation direction (e.g., the normal direction) and step size. The adaptive deformation parameters make the deformable models converge quickly and also make them less likely to fall into local minima.
In summary, regression-based deformable models provide non-local deformations, which make them insensitive to initialization. In addition, many important design choices, such as the search range, deformation direction, and step size, are either eliminated or automatically determined for each vertex according to the learned displacement regressor. Therefore, regression-based deformable models have far fewer parameters to tune than some conventional deformable models.
To improve the efficiency and robustness of the system, the segmentation can be conducted in a multi-resolution fashion. For example, four resolutions can be used. For purposes of illustration, consider the following example.
In a training phase, for the two coarsest resolutions, one multi-task random forest is trained jointly for all five organs. Specifically, the target displacement to predict is a concatenation of the 3D displacements to all five pelvic organs. The entropy in (2) is measured over a multi-class distribution, where each class is a pelvic organ. Joint estimation of all displacement fields can be beneficial because it considers the spatial relationship between neighboring organs. In the two finest resolutions, one multi-task random forest is trained separately for each organ. Compared to the common random forest of the coarsest resolutions, learning an individual random forest captures the specific appearance characteristics of each organ, which is more effective for detailed boundary refinement of the shape model.
In a testing phase, the testing image is first down-sampled to the coarsest resolution (e.g., voxel size 8 × 8 × 8 mm³), where rough segmentations of the five organs can be rapidly obtained by the system. These segmentations serve as good initializations for the next finer resolution. The segmentation is then sequentially performed across different resolutions, until it reaches the finest resolution (e.g., voxel size 1 × 1 × 1 mm³), where the final segmentation is achieved.
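A sketch of the coarse-to-fine loop (ours; segment_at stands in for the per-resolution deformable segmentation, and a 1 mm native voxel size is assumed, as in the example above):

    from scipy import ndimage

    def multiresolution_segment(image, segment_at, voxel_sizes_mm=(8, 4, 2, 1)):
        shape_model = None                             # starts from the mean shape
        for vs in voxel_sizes_mm:
            resampled = ndimage.zoom(image, 1.0 / vs)  # down-sample to vs mm voxels
            shape_model = segment_at(resampled, init=shape_model, voxel_mm=vs)
        return shape_model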
The multi-resolution strategy can provide one or more of the following benefits. Instead of computing the displacement field for the whole image at the fine resolution, only a sub-region of the displacement field around the initialization (given by the previous coarser resolution) has to be computed, significantly improving efficiency. Apart from the efficiency, the robustness also benefits from the joint estimation of the displacement fields of different organs, since the spatial relationship between organs is implicitly considered.
Figure 7 is a flow diagram of an example method 700 for image segmentation. The method 700 can be performed by the image analysis computer system 122 of Figures 1B-C.
The method 700 includes performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image (702). Performing displacement regression can include, for each voxel of the CT scan within the CT image, predicting a 3D displacement pointing from the voxel to a nearest voxel on a boundary of the target organ and adding the 3D displacement to the displacement field. Performing organ classification can include, for each voxel of the CT scan within the CT image, determining an organ likelihood for the voxel belonging to a target organ.
The method 700 includes iteratively repeating the displacement regression and organ classification using, at each iteration, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration (704). Iteratively repeating the displacement regression and organ classification can include, at each iteration, applying a respective trained classifier for the iteration trained using training voxels. Iteratively repeating the displacement regression and organ classification can include training an auto-context model comprising the trained classifiers for each iteration. For example, training the auto-context model can include training a first classifier using one or more extracted appearance features and the training voxels, training a second classifier using context features identified in training the first classifier, and training a third classifier using the updated context features from training the second classifier together with the extracted appearance features.
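At testing time, applying such an auto-context cascade might look like the sketch below; `appearance_features` and `context_features` are hypothetical feature extractors, and each classifier is assumed to return both outputs jointly.

```python
# Illustrative sketch: each auto-context stage consumes the appearance
# features plus context features sampled from the previous stage's outputs,
# refining both the displacement field and the organ classification map.
import numpy as np

def auto_context_predict(image, classifiers):
    field, prob_map = None, None
    for clf in classifiers:                   # one trained model per stage
        feats = appearance_features(image)
        if field is not None:
            feats = np.hstack([feats, context_features(field, prob_map)])
        field, prob_map = clf.predict(feats)  # joint regression + classification
    return field, prob_map
```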
The method 700 includes segmenting the CT image using the displacement field to identify a target organ depicted in the CT image (706). Segmenting the CT image can include guiding a deformable model for organ segmentation using the displacement field obtained by displacement regression. Guiding the deformable model can include guiding the deformable model differently during each phase of hierarchical deformation. For example, guiding the deformable model can include, during a first phase, only allowing the deformable model to translate using the displacement field; during a second phase, estimating the orientation of the deformable model by allowing the deformable model to rigidly rotate; during a third phase, estimating one or more scaling and shearing parameters of the deformable model using an affine transformation; and, during a fourth phase, freely deforming the deformable model using the displacement field. Guiding the deformable model can include initializing a mean shape model on an image center of the CT image.
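A minimal sketch of the four-phase hierarchy follows; the `lookup` callback (returning the regressed displacement at each current vertex position) and the least-squares fitting helpers are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch: translation, then rigid rotation, then affine
# (scale/shear), then free-form deformation, all driven by regressed
# per-vertex displacements.
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid fit of src onto dst (Kabsch algorithm)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return (src - sc) @ R.T + dc

def fit_affine(src, dst):
    """Least-squares affine fit of src onto dst (recovers scale and shear)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return A @ M

def hierarchical_deform(vertices, lookup):
    # Phase 1: translation only (mean of the regressed displacements).
    vertices = vertices + lookup(vertices).mean(axis=0)
    # Phase 2: rigid rotation toward the regressed target positions.
    vertices = fit_rigid(vertices, vertices + lookup(vertices))
    # Phase 3: affine fit estimates scaling and shearing parameters.
    vertices = fit_affine(vertices, vertices + lookup(vertices))
    # Phase 4: free-form deformation; each vertex follows its own
    # displacement (a real system would add mesh smoothness constraints).
    return vertices + lookup(vertices)
```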
It is understood that various details of the presently disclosed subject matter may be changed without departing from the scope of the presently disclosed subject matter. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.

Claims

What is claimed is:
1. A method for image segmentation, the method comprising:
performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image;
iteratively repeating the displacement regression and organ classification using, at each iteration of a plurality of iterations, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration; and
segmenting the CT image using the displacement field to identify a target organ depicted in the CT image.
2. The method of claim 1, wherein performing displacement regression comprises, for each voxel of a plurality of voxels of the CT scan within the CT image, predicting a 3D displacement pointing from the voxel to a nearest voxel on a boundary of the target organ and adding the 3D displacement to the displacement field.
3. The method of claim 1, wherein performing organ classification comprises, for each voxel of a plurality of voxels of the CT scan within the CT image, determining an organ likelihood for the voxel belonging to a target organ and adding the likelihood to the organ classification map.
4. The method of claim 1, wherein iteratively repeating the displacement regression and organ classification comprises, at each iteration, applying a respective trained classifier for the iteration trained using a respective plurality of training voxels.
5. The method of claim 4, wherein iteratively repeating the displacement regression and organ classification comprises training an auto-context model comprising the trained classifiers for each iteration.
6. The method of claim 5, wherein training the auto-context model comprises training a first classifier using one or more extracted appearance features and the plurality of training voxels, training a second classifier using context features identified in training the first classifier, and training a third classifier using updated context features from training the second classifier and the extracted appearance features.
7. The method of claim 1, wherein segmenting the CT image comprises guiding a deformable model for organ segmentation using the displacement field by displacement regression.
8. The method of claim 7, wherein guiding the deformable model comprises guiding the deformable model differently during each phase of a plurality of phases of hierarchical deformation.
9. The method of claim 8, wherein guiding the deformable model comprises, during a first phase, only allowing the deformable model to translate using the displacement field; during a second phase, estimating an orientation of the deformable model by allowing the deformable model to rigidly rotate; during a third phase, estimating one or more scaling and shearing parameters of the deformable model using an affine transformation; and, during a fourth phase, freely deforming the deformable model using the displacement field.
10. The method of claim 8, wherein guiding the deformable model comprises initializing a mean shape model on an image center of the CT image.
11. A system comprising:
at least one processor; and
memory storing one or more computer programs executable by the at least one processor for:
performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image;
iteratively repeating the displacement regression and organ classification using, at each iteration of a plurality of iterations, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration; and
segmenting the CT image using the displacement field to identify a target organ depicted in the CT image.
12. The system of claim 11, wherein performing displacement regression comprises, for each voxel of a plurality of voxels of the CT scan within the CT image, predicting a 3D displacement pointing from the voxel to a nearest voxel on a boundary of the target organ and adding the 3D displacement to the displacement field.
13. The system of claim 11, wherein performing organ classification comprises, for each voxel of a plurality of voxels of the CT scan within the CT image, determining an organ likelihood for the voxel belonging to a target organ.
14. The system of claim 11, wherein iteratively repeating the displacement regression and organ classification comprises, at each iteration, applying a respective trained classifier for the iteration trained using a respective plurality of training voxels.
15. The system of claim 14, wherein iteratively repeating the displacement regression and organ classification comprises training an auto-context model comprising the trained classifiers for each iteration.
16. The system of claim 15, wherein training the auto-context model comprises training a first classifier using one or more extracted appearance features and the plurality of training voxels, training a second classifier using context features identified in training the first classifier, and training a third classifier using updated context features from training the second classifier and the extracted appearance features.
17. The system of claim 11, wherein segmenting the CT image comprises guiding a deformable model for organ segmentation using the displacement field by displacement regression.
18. The system of claim 17, wherein guiding the deformable model comprises guiding the deformable model differently during each phase of a plurality of phases of hierarchical deformation.
19. The system of claim 18, wherein guiding the deformable model comprises, during a first phase, only allowing the deformable model to translate using the displacement field; during a second phase, estimating an orientation of the deformable model by allowing the deformable model to rigidly rotate; during a third phase, estimating one or more scaling and shearing parameters of the deformable model using an affine transformation; and, during a fourth phase, freely deforming the deformable model using the displacement field.
20. The system of claim 18, wherein guiding the deformable model comprises initializing a mean shape model on an image center of the CT image.
21. A non-transitory computer storage medium storing instructions for at least one processor to perform operations comprising:
performing both displacement regression and organ classification on a computed tomography (CT) image of a CT scan of a patient using a shared random forest, resulting in a displacement field for the CT image and an organ classification map for the CT image;
iteratively repeating the displacement regression and organ classification using, at each iteration of a plurality of iterations, one or more extracted context features from the displacement field and the organ classification map of a prior iteration to refine the displacement field and the organ classification map of a current iteration; and
segmenting the CT image using the displacement field to identify a target organ depicted in the CT image.
PCT/US2016/055495 2015-10-05 2016-10-05 Image segmentation of organs depicted in computed tomography images WO2017062453A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562237506P 2015-10-05 2015-10-05
US62/237,506 2015-10-05

Publications (1)

Publication Number Publication Date
WO2017062453A1 (en) 2017-04-13

Family

ID=58488451

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/055495 WO2017062453A1 (en) 2015-10-05 2016-10-05 Image segmentation of organs depicted in computed tomography images

Country Status (1)

Country Link
WO (1) WO2017062453A1 (en)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAOZONG GAO ET AL.: "Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 35, no. 6, June 2016 (2016-06-01), pages 1532 - 1543, XP011612566 *
YEQIN SHAO ET AL.: "Locally-constrained Boundary Regression for Segmentation of Prostate and Rectum in the Planning CT Images", MEDICAL IMAGE ANALYSIS, vol. 26, no. 1, 2 October 2015 (2015-10-02), pages 345 - 356, XP055376423 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392918A (en) * 2017-06-22 2017-11-24 苏州大学 OCT image layer segmentation method based on random forest and composite active curve
KR101856426B1 (en) * 2017-11-27 2018-06-20 세종대학교산학협력단 3D geometry enhancement method and apparatus thereof
US10403038B2 (en) 2017-11-27 2019-09-03 Industry Academy Cooperation Foundation Of Sejong University 3D geometry enhancement method and apparatus therefor
CN109410217A (en) * 2018-09-26 2019-03-01 广东毅达医疗科技股份有限公司 Image segmentation method, apparatus, and computer-readable storage medium
CN112598634A (en) * 2020-12-18 2021-04-02 燕山大学 CT image organ positioning method based on 3D CNN and iterative search
CN116452559A (en) * 2023-04-19 2023-07-18 深圳市睿法生物科技有限公司 Tumor lesion localization method and device based on ctDNA fragmentation patterns
CN116452559B (en) * 2023-04-19 2024-02-20 深圳市睿法生物科技有限公司 Tumor lesion localization method and device based on ctDNA fragmentation patterns

Similar Documents

Milletari et al. V-net: Fully convolutional neural networks for volumetric medical image segmentation
EP3195257B1 (en) Systems and methods for segmenting medical images based on anatomical landmark-based features
CN107403446B (en) Method and system for image registration using intelligent artificial agents
EP3504680B1 (en) Systems and methods for image segmentation using convolutional neural network
EP3488381B1 (en) Method and system for artificial intelligence based medical image segmentation
US10147185B2 (en) Interactive segmentation
Parisot et al. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs
US10169873B2 (en) Weakly supervised probabilistic atlas generation through multi-atlas label fusion
Mesejo et al. Biomedical image segmentation using geometric deformable models and metaheuristics
Erdt et al. Regmentation: A new view of image segmentation and registration
CN106340021B (en) Blood vessel extraction method
US8150119B2 (en) Method and system for left ventricle endocardium surface segmentation using constrained optimal mesh smoothing
CN111105424A (en) Lymph node automatic delineation method and device
WO2017062453A1 (en) Image segmentation of organs depicted in computed tomography images
US9269156B2 (en) Method and system for automatic prostate segmentation in magnetic resonance images
Shao et al. Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images
Martínez et al. Segmentation of pelvic structures for planning CT using a geometrical shape model tuned by a multi-scale edge detector
Li et al. Learning image context for segmentation of the prostate in CT-guided radiotherapy
Lesage et al. Adaptive particle filtering for coronary artery segmentation from 3D CT angiograms
RU2654199C1 (en) Segmentation of human tissues in computer image
Nouranian et al. A multi-atlas-based segmentation framework for prostate brachytherapy
EP4030385A1 (en) Devices and process for synthesizing images from a source nature to a target nature
Lahoti et al. Whole Tumor Segmentation from Brain MR images using Multi-view 2D Convolutional Neural Network
Moschidis et al. Automatic differential segmentation of the prostate in 3-D MRI using Random Forest classification and graph-cuts optimization
Kurugol et al. Centerline extraction with principal curve tracing to improve 3D level set esophagus segmentation in CT images

Legal Events

• 121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 16854213; Country of ref document: EP; Kind code of ref document: A1.
• NENP: Non-entry into the national phase. Ref country code: DE.
• 32PN (EP): Public notification in the EP bulletin as the address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 23.07.2018).
• 122 (EP): PCT application non-entry in European phase. Ref document number: 16854213; Country of ref document: EP; Kind code of ref document: A1.